00:00:00.002 Started by upstream project "autotest-nightly" build number 4308
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3671
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.253 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.254 The recommended git tool is: git
00:00:00.254 using credential 00000000-0000-0000-0000-000000000002
00:00:00.256 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.279 Fetching changes from the remote Git repository
00:00:00.281 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.299 Using shallow fetch with depth 1
00:00:00.299 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.299 > git --version # timeout=10
00:00:00.320 > git --version # 'git version 2.39.2'
00:00:00.320 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.333 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.333 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.399 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.409 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.420 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.420 > git config core.sparsecheckout # timeout=10
00:00:07.432 > git read-tree -mu HEAD # timeout=10
00:00:07.448 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.469 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.469 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.551 [Pipeline] Start of Pipeline
00:00:07.563 [Pipeline] library
00:00:07.565 Loading library shm_lib@master
00:00:07.565 Library shm_lib@master is cached. Copying from home.
00:00:07.582 [Pipeline] node
00:00:07.596 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:07.597 [Pipeline] {
00:00:07.611 [Pipeline] catchError
00:00:07.613 [Pipeline] {
00:00:07.629 [Pipeline] wrap
00:00:07.640 [Pipeline] {
00:00:07.647 [Pipeline] stage
00:00:07.650 [Pipeline] { (Prologue)
00:00:07.837 [Pipeline] sh
00:00:08.120 + logger -p user.info -t JENKINS-CI
00:00:08.135 [Pipeline] echo
00:00:08.137 Node: WFP21
00:00:08.143 [Pipeline] sh
00:00:08.440 [Pipeline] setCustomBuildProperty
00:00:08.453 [Pipeline] echo
00:00:08.455 Cleanup processes
00:00:08.460 [Pipeline] sh
00:00:08.745 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.745 3040319 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.756 [Pipeline] sh
00:00:09.042 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.042 ++ awk '{print $1}'
00:00:09.042 ++ grep -v 'sudo pgrep'
00:00:09.042 + sudo kill -9
00:00:09.042 + true
00:00:09.055 [Pipeline] cleanWs
00:00:09.063 [WS-CLEANUP] Deleting project workspace...
00:00:09.063 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.070 [WS-CLEANUP] done
00:00:09.074 [Pipeline] setCustomBuildProperty
00:00:09.085 [Pipeline] sh
00:00:09.366 + sudo git config --global --replace-all safe.directory '*'
00:00:09.459 [Pipeline] httpRequest
00:00:09.839 [Pipeline] echo
00:00:09.841 Sorcerer 10.211.164.20 is alive
00:00:09.852 [Pipeline] retry
00:00:09.854 [Pipeline] {
00:00:09.871 [Pipeline] httpRequest
00:00:09.876 HttpMethod: GET
00:00:09.876 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.877 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.910 Response Code: HTTP/1.1 200 OK
00:00:09.911 Success: Status code 200 is in the accepted range: 200,404
00:00:09.911 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:22.191 [Pipeline] }
00:00:22.209 [Pipeline] // retry
00:00:22.217 [Pipeline] sh
00:00:22.519 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:22.538 [Pipeline] httpRequest
00:00:22.919 [Pipeline] echo
00:00:22.921 Sorcerer 10.211.164.20 is alive
00:00:22.929 [Pipeline] retry
00:00:22.931 [Pipeline] {
00:00:22.944 [Pipeline] httpRequest
00:00:22.948 HttpMethod: GET
00:00:22.948 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:22.949 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:22.973 Response Code: HTTP/1.1 200 OK
00:00:22.974 Success: Status code 200 is in the accepted range: 200,404
00:00:22.974 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:01:28.279 [Pipeline] }
00:01:28.296 [Pipeline] // retry
00:01:28.303 [Pipeline] sh
00:01:28.590 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:01:31.141 [Pipeline] sh
00:01:31.426 + git -C spdk log --oneline -n5
00:01:31.426 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:01:31.426 5592070b3 doc: update nvmf_tracing.md
00:01:31.426 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:01:31.426 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:01:31.426 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:01:31.437 [Pipeline] }
00:01:31.451 [Pipeline] // stage
00:01:31.460 [Pipeline] stage
00:01:31.463 [Pipeline] { (Prepare)
00:01:31.479 [Pipeline] writeFile
00:01:31.495 [Pipeline] sh
00:01:31.781 + logger -p user.info -t JENKINS-CI
00:01:31.795 [Pipeline] sh
00:01:32.082 + logger -p user.info -t JENKINS-CI
00:01:32.097 [Pipeline] sh
00:01:32.383 + cat autorun-spdk.conf
00:01:32.384 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.384 SPDK_TEST_NVMF=1
00:01:32.384 SPDK_TEST_NVME_CLI=1
00:01:32.384 SPDK_TEST_NVMF_NICS=mlx5
00:01:32.384 SPDK_RUN_ASAN=1
00:01:32.384 SPDK_RUN_UBSAN=1
00:01:32.384 NET_TYPE=phy
00:01:32.392 RUN_NIGHTLY=1
00:01:32.397 [Pipeline] readFile
00:01:32.423 [Pipeline] withEnv
00:01:32.426 [Pipeline] {
00:01:32.439 [Pipeline] sh
00:01:32.727 + set -ex
00:01:32.727 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:32.727 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:32.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.727 ++ SPDK_TEST_NVMF=1
00:01:32.727 ++ SPDK_TEST_NVME_CLI=1
00:01:32.727 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:32.727 ++ SPDK_RUN_ASAN=1
00:01:32.727 ++ SPDK_RUN_UBSAN=1
00:01:32.727 ++ NET_TYPE=phy
00:01:32.727 ++ RUN_NIGHTLY=1
00:01:32.727 + case $SPDK_TEST_NVMF_NICS in
00:01:32.727 + DRIVERS=mlx5_ib
00:01:32.727 + [[ -n mlx5_ib ]]
00:01:32.727 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:32.727 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:39.306 rmmod: ERROR: Module irdma is not currently loaded
00:01:39.306 rmmod: ERROR: Module i40iw is not currently loaded
00:01:39.306 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:39.306 + true
00:01:39.306 + for D in $DRIVERS
00:01:39.306 + sudo modprobe mlx5_ib
00:01:39.306 + exit 0
00:01:39.316 [Pipeline] }
00:01:39.330 [Pipeline] // withEnv
00:01:39.335 [Pipeline] }
00:01:39.349 [Pipeline] // stage
00:01:39.358 [Pipeline] catchError
00:01:39.359 [Pipeline] {
00:01:39.373 [Pipeline] timeout
00:01:39.373 Timeout set to expire in 1 hr 0 min
00:01:39.375 [Pipeline] {
00:01:39.389 [Pipeline] stage
00:01:39.391 [Pipeline] { (Tests)
00:01:39.405 [Pipeline] sh
00:01:39.692 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:39.692 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:39.692 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:39.692 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:39.692 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:39.692 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:39.692 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:39.692 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:39.692 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:39.692 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:39.692 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:39.692 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:39.692 + source /etc/os-release
00:01:39.692 ++ NAME='Fedora Linux'
00:01:39.692 ++ VERSION='39 (Cloud Edition)'
00:01:39.692 ++ ID=fedora
00:01:39.692 ++ VERSION_ID=39
00:01:39.692 ++ VERSION_CODENAME=
00:01:39.692 ++ PLATFORM_ID=platform:f39
00:01:39.692 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:39.692 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:39.692 ++ LOGO=fedora-logo-icon
00:01:39.692 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:39.692 ++ HOME_URL=https://fedoraproject.org/
00:01:39.692 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:39.692 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:39.692 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:39.692 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:39.692 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:39.692 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:39.692 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:39.692 ++ SUPPORT_END=2024-11-12
00:01:39.692 ++ VARIANT='Cloud Edition'
00:01:39.692 ++ VARIANT_ID=cloud
00:01:39.692 + uname -a
00:01:39.692 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:39.692 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:43.894 Hugepages
00:01:43.894 node hugesize free / total
00:01:43.894 node0 1048576kB 0 / 0
00:01:43.894 node0 2048kB 0 / 0
00:01:43.894 node1 1048576kB 0 / 0
00:01:43.894 node1 2048kB 0 / 0
00:01:43.894
00:01:43.894 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:43.894 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:43.894 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:43.894 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:43.894 + rm -f /tmp/spdk-ld-path
00:01:43.894 + source autorun-spdk.conf
00:01:43.894 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.894 ++ SPDK_TEST_NVMF=1
00:01:43.894 ++ SPDK_TEST_NVME_CLI=1
00:01:43.894 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:43.894 ++ SPDK_RUN_ASAN=1
00:01:43.894 ++ SPDK_RUN_UBSAN=1
00:01:43.894 ++ NET_TYPE=phy
00:01:43.894 ++ RUN_NIGHTLY=1
00:01:43.894 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:43.894 + [[ -n '' ]]
00:01:43.894 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:43.894 + for M in /var/spdk/build-*-manifest.txt
00:01:43.894 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:43.894 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:43.894 + for M in /var/spdk/build-*-manifest.txt
00:01:43.894 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:43.894 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:43.894 + for M in /var/spdk/build-*-manifest.txt
00:01:43.894 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:43.894 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:43.894 ++ uname
00:01:43.894 + [[ Linux == \L\i\n\u\x ]]
00:01:43.894 + sudo dmesg -T
00:01:43.894 + sudo dmesg --clear
00:01:43.894 + dmesg_pid=3041971
00:01:43.894 + sudo dmesg -Tw
00:01:43.894 + [[ Fedora Linux == FreeBSD ]]
00:01:43.894 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:43.894 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:43.894 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:43.894 + [[ -x /usr/src/fio-static/fio ]]
00:01:43.894 + export FIO_BIN=/usr/src/fio-static/fio
00:01:43.894 + FIO_BIN=/usr/src/fio-static/fio
00:01:43.894 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:43.894 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:43.894 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:43.895 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:43.895 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:43.895 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:43.895 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:43.895 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:43.895 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:43.895 05:18:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:43.895 05:18:40 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ NET_TYPE=phy
00:01:43.895 05:18:40 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:01:43.895 05:18:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:43.895 05:18:40 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:43.895 05:18:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:43.895 05:18:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:01:43.895 05:18:40 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:43.895 05:18:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:43.895 05:18:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:43.895 05:18:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:43.895 05:18:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.895 05:18:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.895 05:18:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.895 05:18:40 -- paths/export.sh@5 -- $ export PATH
00:01:43.895 05:18:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.895 05:18:40 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:01:43.895 05:18:40 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:43.895 05:18:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732681120.XXXXXX
00:01:43.895 05:18:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732681120.XimNVZ
00:01:43.895 05:18:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:43.895 05:18:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:43.895 05:18:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:01:43.895 05:18:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:43.895 05:18:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:43.895 05:18:40 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:43.895 05:18:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:43.895 05:18:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.895 05:18:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:43.895 05:18:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:43.895 05:18:40 -- pm/common@17 -- $ local monitor
00:01:43.895 05:18:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.895 05:18:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.895 05:18:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.895 05:18:40 -- pm/common@21 -- $ date +%s
00:01:43.895 05:18:40 -- pm/common@21 -- $ date +%s
00:01:43.895 05:18:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.895 05:18:40 -- pm/common@25 -- $ sleep 1
00:01:43.895 05:18:40 -- pm/common@21 -- $ date +%s
00:01:43.895 05:18:40 -- pm/common@21 -- $ date +%s
00:01:43.895 05:18:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681120
00:01:43.895 05:18:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681120
00:01:43.895 05:18:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681120
00:01:43.895 05:18:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681120
00:01:43.895 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681120_collect-vmstat.pm.log
00:01:43.895 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681120_collect-cpu-load.pm.log
00:01:43.895 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681120_collect-cpu-temp.pm.log
00:01:43.895 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681120_collect-bmc-pm.bmc.pm.log
00:01:44.835 05:18:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:44.835 05:18:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:44.835 05:18:41 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:44.835 05:18:41 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:44.835 05:18:41 -- spdk/autobuild.sh@16 -- $ date -u
00:01:44.835 Wed Nov 27 04:18:41 AM UTC 2024
00:01:44.835 05:18:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:44.835 v25.01-pre-271-g2f2acf4eb
00:01:44.835 05:18:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:44.835 05:18:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:44.835 05:18:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:44.835 05:18:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:44.835 05:18:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.835 ************************************
00:01:44.835 START TEST asan
00:01:44.835 ************************************
00:01:44.835 05:18:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:44.835 using asan
00:01:44.835
00:01:44.835 real 0m0.001s
00:01:44.835 user 0m0.000s
00:01:44.835 sys 0m0.000s
00:01:44.835 05:18:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:44.835 05:18:41 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:44.835 ************************************
00:01:44.835 END TEST asan
00:01:44.835 ************************************
00:01:44.835 05:18:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:44.835 05:18:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:44.836 05:18:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:44.836 05:18:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:44.836 05:18:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:45.096 ************************************
00:01:45.096 START TEST ubsan
00:01:45.096 ************************************
00:01:45.096 05:18:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:45.096 using ubsan
00:01:45.096
00:01:45.096 real 0m0.000s
00:01:45.096 user 0m0.000s
00:01:45.096 sys 0m0.000s
00:01:45.096 05:18:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:45.096 05:18:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:45.096 ************************************
00:01:45.096 END TEST ubsan
00:01:45.096 ************************************
00:01:45.096 05:18:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:45.096 05:18:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:45.096 05:18:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:45.096 05:18:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:45.096 05:18:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:45.096 05:18:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:45.096 05:18:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:45.096 05:18:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:45.096 05:18:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:45.096 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:45.096 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:45.666 Using 'verbs' RDMA provider
00:02:01.143 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:13.363 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:13.363 Creating mk/config.mk...done.
00:02:13.363 Creating mk/cc.flags.mk...done.
00:02:13.363 Type 'make' to build.
00:02:13.363 05:19:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:02:13.363 05:19:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:13.363 05:19:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:13.363 05:19:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:13.363 ************************************
00:02:13.363 START TEST make
00:02:13.363 ************************************
00:02:13.363 05:19:09 make -- common/autotest_common.sh@1129 -- $ make -j112
00:02:13.363 make[1]: Nothing to be done for 'all'.
00:02:21.483 The Meson build system
00:02:21.483 Version: 1.5.0
00:02:21.483 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:02:21.483 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:02:21.483 Build type: native build
00:02:21.483 Program cat found: YES (/usr/bin/cat)
00:02:21.483 Project name: DPDK
00:02:21.483 Project version: 24.03.0
00:02:21.483 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:21.483 C linker for the host machine: cc ld.bfd 2.40-14
00:02:21.483 Host machine cpu family: x86_64
00:02:21.483 Host machine cpu: x86_64
00:02:21.483 Message: ## Building in Developer Mode ##
00:02:21.483 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:21.483 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:21.483 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:21.483 Program python3 found: YES (/usr/bin/python3)
00:02:21.483 Program cat found: YES (/usr/bin/cat)
00:02:21.483 Compiler for C supports arguments -march=native: YES
00:02:21.483 Checking for size of "void *" : 8
00:02:21.483 Checking for size of "void *" : 8 (cached)
00:02:21.483 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:21.483 Library m found: YES
00:02:21.483 Library numa found: YES
00:02:21.483 Has header "numaif.h" : YES
00:02:21.483 Library fdt found: NO
00:02:21.483 Library execinfo found: NO
00:02:21.483 Has header "execinfo.h" : YES
00:02:21.483 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:21.483 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:21.483 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:21.483 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:21.483 Run-time dependency openssl found: YES 3.1.1
00:02:21.483 Run-time dependency libpcap found: YES 1.10.4
00:02:21.483 Has header "pcap.h" with dependency libpcap: YES
00:02:21.483 Compiler for C supports arguments -Wcast-qual: YES
00:02:21.483 Compiler for C supports arguments -Wdeprecated: YES
00:02:21.483 Compiler for C supports arguments -Wformat: YES
00:02:21.483 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:21.483 Compiler for C supports arguments -Wformat-security: NO
00:02:21.483 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:21.483 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:21.483 Compiler for C supports arguments -Wnested-externs: YES
00:02:21.483 Compiler for C supports arguments -Wold-style-definition: YES
00:02:21.483 Compiler for C supports arguments -Wpointer-arith: YES
00:02:21.483 Compiler for C supports arguments -Wsign-compare: YES
00:02:21.483 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:21.483 Compiler for C supports arguments -Wundef: YES
00:02:21.483 Compiler for C supports arguments -Wwrite-strings: YES
00:02:21.483 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:21.483 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:21.483 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:21.483 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:21.483 Program objdump found: YES (/usr/bin/objdump)
00:02:21.483 Compiler for C supports arguments -mavx512f: YES
00:02:21.483 Checking if "AVX512 checking" compiles: YES
00:02:21.483 Fetching value of define "__SSE4_2__" : 1
00:02:21.483 Fetching value of define "__AES__" : 1
00:02:21.483 Fetching value of define "__AVX__" : 1
00:02:21.483 Fetching value of define "__AVX2__" : 1
00:02:21.483 Fetching value of define "__AVX512BW__" : 1
00:02:21.483 Fetching value of define "__AVX512CD__" : 1
00:02:21.483 Fetching value of define "__AVX512DQ__" : 1
00:02:21.483 Fetching value of define "__AVX512F__" : 1
00:02:21.483 Fetching value of define "__AVX512VL__" : 1
00:02:21.483 Fetching value of define "__PCLMUL__" : 1
00:02:21.484 Fetching value of define "__RDRND__" : 1
00:02:21.484 Fetching value of define "__RDSEED__" : 1
00:02:21.484 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:21.484 Fetching value of define "__znver1__" : (undefined)
00:02:21.484 Fetching value of define "__znver2__" : (undefined)
00:02:21.484 Fetching value of define "__znver3__" : (undefined)
00:02:21.484 Fetching value of define "__znver4__" : (undefined)
00:02:21.484 Library asan found: YES
00:02:21.484 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:21.484 Message: lib/log: Defining dependency "log"
00:02:21.484 Message: lib/kvargs: Defining dependency "kvargs"
00:02:21.484 Message: lib/telemetry: Defining dependency "telemetry"
00:02:21.484 Library rt found: YES
00:02:21.484 Checking for function "getentropy" : NO
00:02:21.484 Message: lib/eal: Defining dependency "eal"
00:02:21.484 Message: lib/ring: Defining dependency "ring"
00:02:21.484 Message: lib/rcu: Defining dependency "rcu"
00:02:21.484 Message: lib/mempool: Defining dependency "mempool"
00:02:21.484 Message: lib/mbuf: Defining dependency "mbuf"
00:02:21.484 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:21.484 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:21.484 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:21.484 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:21.484 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:21.484 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:21.484 Compiler for C supports arguments -mpclmul: YES
00:02:21.484 Compiler for C supports arguments -maes: YES
00:02:21.484 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:21.484 Compiler for C supports arguments -mavx512bw: YES
00:02:21.484 Compiler for C supports arguments -mavx512dq: YES
00:02:21.484 Compiler for C supports arguments -mavx512vl: YES
00:02:21.484 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:21.484 Compiler for C supports arguments -mavx2: YES
00:02:21.484 Compiler for C supports arguments -mavx: YES
00:02:21.484 Message: lib/net: Defining dependency "net"
00:02:21.484 Message: lib/meter: Defining dependency "meter"
00:02:21.484 Message: lib/ethdev: Defining dependency "ethdev"
00:02:21.484 Message: lib/pci: Defining dependency "pci"
00:02:21.484 Message: lib/cmdline: Defining dependency "cmdline"
00:02:21.484 Message: lib/hash: Defining dependency "hash"
00:02:21.484 Message: lib/timer: Defining dependency "timer"
00:02:21.484 Message: lib/compressdev: Defining dependency "compressdev"
00:02:21.484 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:21.484 Message: lib/dmadev: Defining dependency "dmadev"
00:02:21.484 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:21.484 Message: lib/power: Defining dependency "power"
00:02:21.484 Message: lib/reorder: Defining dependency "reorder"
00:02:21.484 Message: lib/security: Defining dependency "security"
00:02:21.484 Has header "linux/userfaultfd.h" : YES
00:02:21.484 Has header "linux/vduse.h" : YES
00:02:21.484 Message: lib/vhost: Defining dependency "vhost"
00:02:21.484 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:21.484 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:21.484 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:21.484 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:21.484 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:21.484 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:21.484 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:21.484 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:21.484 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:21.484 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:21.484 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:21.484 Configuring doxy-api-html.conf using configuration
00:02:21.484 Configuring doxy-api-man.conf using configuration
00:02:21.484 Program mandb found: YES (/usr/bin/mandb)
00:02:21.484 Program sphinx-build found: NO
00:02:21.484 Configuring rte_build_config.h using configuration
00:02:21.484 Message:
00:02:21.484 =================
00:02:21.484 Applications Enabled
00:02:21.484 =================
00:02:21.484
00:02:21.484 apps:
00:02:21.484
00:02:21.484
00:02:21.484 Message:
00:02:21.484 =================
00:02:21.484 Libraries Enabled
00:02:21.484 =================
00:02:21.484
00:02:21.484 libs:
00:02:21.484 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:21.484 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:21.484 cryptodev, dmadev, power, reorder, security, vhost,
00:02:21.484
00:02:21.484 Message:
00:02:21.484 ===============
00:02:21.484 Drivers Enabled
00:02:21.484 ===============
00:02:21.484
00:02:21.484 common:
00:02:21.484
00:02:21.484 bus:
00:02:21.484 pci, vdev,
00:02:21.484 mempool:
00:02:21.484 ring,
00:02:21.484 dma:
00:02:21.484
00:02:21.484 net:
00:02:21.484
00:02:21.484 crypto:
00:02:21.484
00:02:21.484 compress:
00:02:21.484
00:02:21.484 vdpa:
00:02:21.484
00:02:21.484
00:02:21.484 Message:
00:02:21.484 =================
00:02:21.484 Content Skipped
00:02:21.484 =================
00:02:21.484
00:02:21.484 apps:
00:02:21.484 dumpcap: explicitly disabled via build config
00:02:21.484 graph: explicitly disabled via build config
00:02:21.484 pdump: explicitly disabled via build config
00:02:21.484 proc-info: explicitly disabled via build config
00:02:21.484 test-acl: explicitly disabled via build config
00:02:21.484 test-bbdev: explicitly disabled via build config
00:02:21.484 test-cmdline: explicitly disabled via build config
00:02:21.484 test-compress-perf: explicitly disabled via build config
00:02:21.484 test-crypto-perf: explicitly disabled via build config 00:02:21.484 test-dma-perf: explicitly disabled via build config 00:02:21.484 test-eventdev: explicitly disabled via build config 00:02:21.484 test-fib: explicitly disabled via build config 00:02:21.484 test-flow-perf: explicitly disabled via build config 00:02:21.484 test-gpudev: explicitly disabled via build config 00:02:21.484 test-mldev: explicitly disabled via build config 00:02:21.484 test-pipeline: explicitly disabled via build config 00:02:21.484 test-pmd: explicitly disabled via build config 00:02:21.484 test-regex: explicitly disabled via build config 00:02:21.484 test-sad: explicitly disabled via build config 00:02:21.484 test-security-perf: explicitly disabled via build config 00:02:21.484 00:02:21.484 libs: 00:02:21.484 argparse: explicitly disabled via build config 00:02:21.484 metrics: explicitly disabled via build config 00:02:21.484 acl: explicitly disabled via build config 00:02:21.484 bbdev: explicitly disabled via build config 00:02:21.484 bitratestats: explicitly disabled via build config 00:02:21.484 bpf: explicitly disabled via build config 00:02:21.484 cfgfile: explicitly disabled via build config 00:02:21.484 distributor: explicitly disabled via build config 00:02:21.484 efd: explicitly disabled via build config 00:02:21.484 eventdev: explicitly disabled via build config 00:02:21.484 dispatcher: explicitly disabled via build config 00:02:21.484 gpudev: explicitly disabled via build config 00:02:21.484 gro: explicitly disabled via build config 00:02:21.484 gso: explicitly disabled via build config 00:02:21.484 ip_frag: explicitly disabled via build config 00:02:21.484 jobstats: explicitly disabled via build config 00:02:21.484 latencystats: explicitly disabled via build config 00:02:21.484 lpm: explicitly disabled via build config 00:02:21.484 member: explicitly disabled via build config 00:02:21.484 pcapng: explicitly disabled via build config 00:02:21.484 rawdev: 
explicitly disabled via build config 00:02:21.484 regexdev: explicitly disabled via build config 00:02:21.484 mldev: explicitly disabled via build config 00:02:21.484 rib: explicitly disabled via build config 00:02:21.484 sched: explicitly disabled via build config 00:02:21.484 stack: explicitly disabled via build config 00:02:21.484 ipsec: explicitly disabled via build config 00:02:21.484 pdcp: explicitly disabled via build config 00:02:21.484 fib: explicitly disabled via build config 00:02:21.484 port: explicitly disabled via build config 00:02:21.484 pdump: explicitly disabled via build config 00:02:21.484 table: explicitly disabled via build config 00:02:21.484 pipeline: explicitly disabled via build config 00:02:21.484 graph: explicitly disabled via build config 00:02:21.484 node: explicitly disabled via build config 00:02:21.484 00:02:21.484 drivers: 00:02:21.484 common/cpt: not in enabled drivers build config 00:02:21.484 common/dpaax: not in enabled drivers build config 00:02:21.484 common/iavf: not in enabled drivers build config 00:02:21.484 common/idpf: not in enabled drivers build config 00:02:21.484 common/ionic: not in enabled drivers build config 00:02:21.484 common/mvep: not in enabled drivers build config 00:02:21.484 common/octeontx: not in enabled drivers build config 00:02:21.484 bus/auxiliary: not in enabled drivers build config 00:02:21.484 bus/cdx: not in enabled drivers build config 00:02:21.484 bus/dpaa: not in enabled drivers build config 00:02:21.484 bus/fslmc: not in enabled drivers build config 00:02:21.484 bus/ifpga: not in enabled drivers build config 00:02:21.484 bus/platform: not in enabled drivers build config 00:02:21.484 bus/uacce: not in enabled drivers build config 00:02:21.484 bus/vmbus: not in enabled drivers build config 00:02:21.484 common/cnxk: not in enabled drivers build config 00:02:21.484 common/mlx5: not in enabled drivers build config 00:02:21.484 common/nfp: not in enabled drivers build config 00:02:21.484 
common/nitrox: not in enabled drivers build config 00:02:21.484 common/qat: not in enabled drivers build config 00:02:21.484 common/sfc_efx: not in enabled drivers build config 00:02:21.484 mempool/bucket: not in enabled drivers build config 00:02:21.484 mempool/cnxk: not in enabled drivers build config 00:02:21.484 mempool/dpaa: not in enabled drivers build config 00:02:21.484 mempool/dpaa2: not in enabled drivers build config 00:02:21.484 mempool/octeontx: not in enabled drivers build config 00:02:21.484 mempool/stack: not in enabled drivers build config 00:02:21.484 dma/cnxk: not in enabled drivers build config 00:02:21.484 dma/dpaa: not in enabled drivers build config 00:02:21.484 dma/dpaa2: not in enabled drivers build config 00:02:21.485 dma/hisilicon: not in enabled drivers build config 00:02:21.485 dma/idxd: not in enabled drivers build config 00:02:21.485 dma/ioat: not in enabled drivers build config 00:02:21.485 dma/skeleton: not in enabled drivers build config 00:02:21.485 net/af_packet: not in enabled drivers build config 00:02:21.485 net/af_xdp: not in enabled drivers build config 00:02:21.485 net/ark: not in enabled drivers build config 00:02:21.485 net/atlantic: not in enabled drivers build config 00:02:21.485 net/avp: not in enabled drivers build config 00:02:21.485 net/axgbe: not in enabled drivers build config 00:02:21.485 net/bnx2x: not in enabled drivers build config 00:02:21.485 net/bnxt: not in enabled drivers build config 00:02:21.485 net/bonding: not in enabled drivers build config 00:02:21.485 net/cnxk: not in enabled drivers build config 00:02:21.485 net/cpfl: not in enabled drivers build config 00:02:21.485 net/cxgbe: not in enabled drivers build config 00:02:21.485 net/dpaa: not in enabled drivers build config 00:02:21.485 net/dpaa2: not in enabled drivers build config 00:02:21.485 net/e1000: not in enabled drivers build config 00:02:21.485 net/ena: not in enabled drivers build config 00:02:21.485 net/enetc: not in enabled drivers build 
config 00:02:21.485 net/enetfec: not in enabled drivers build config 00:02:21.485 net/enic: not in enabled drivers build config 00:02:21.485 net/failsafe: not in enabled drivers build config 00:02:21.485 net/fm10k: not in enabled drivers build config 00:02:21.485 net/gve: not in enabled drivers build config 00:02:21.485 net/hinic: not in enabled drivers build config 00:02:21.485 net/hns3: not in enabled drivers build config 00:02:21.485 net/i40e: not in enabled drivers build config 00:02:21.485 net/iavf: not in enabled drivers build config 00:02:21.485 net/ice: not in enabled drivers build config 00:02:21.485 net/idpf: not in enabled drivers build config 00:02:21.485 net/igc: not in enabled drivers build config 00:02:21.485 net/ionic: not in enabled drivers build config 00:02:21.485 net/ipn3ke: not in enabled drivers build config 00:02:21.485 net/ixgbe: not in enabled drivers build config 00:02:21.485 net/mana: not in enabled drivers build config 00:02:21.485 net/memif: not in enabled drivers build config 00:02:21.485 net/mlx4: not in enabled drivers build config 00:02:21.485 net/mlx5: not in enabled drivers build config 00:02:21.485 net/mvneta: not in enabled drivers build config 00:02:21.485 net/mvpp2: not in enabled drivers build config 00:02:21.485 net/netvsc: not in enabled drivers build config 00:02:21.485 net/nfb: not in enabled drivers build config 00:02:21.485 net/nfp: not in enabled drivers build config 00:02:21.485 net/ngbe: not in enabled drivers build config 00:02:21.485 net/null: not in enabled drivers build config 00:02:21.485 net/octeontx: not in enabled drivers build config 00:02:21.485 net/octeon_ep: not in enabled drivers build config 00:02:21.485 net/pcap: not in enabled drivers build config 00:02:21.485 net/pfe: not in enabled drivers build config 00:02:21.485 net/qede: not in enabled drivers build config 00:02:21.485 net/ring: not in enabled drivers build config 00:02:21.485 net/sfc: not in enabled drivers build config 00:02:21.485 
net/softnic: not in enabled drivers build config 00:02:21.485 net/tap: not in enabled drivers build config 00:02:21.485 net/thunderx: not in enabled drivers build config 00:02:21.485 net/txgbe: not in enabled drivers build config 00:02:21.485 net/vdev_netvsc: not in enabled drivers build config 00:02:21.485 net/vhost: not in enabled drivers build config 00:02:21.485 net/virtio: not in enabled drivers build config 00:02:21.485 net/vmxnet3: not in enabled drivers build config 00:02:21.485 raw/*: missing internal dependency, "rawdev" 00:02:21.485 crypto/armv8: not in enabled drivers build config 00:02:21.485 crypto/bcmfs: not in enabled drivers build config 00:02:21.485 crypto/caam_jr: not in enabled drivers build config 00:02:21.485 crypto/ccp: not in enabled drivers build config 00:02:21.485 crypto/cnxk: not in enabled drivers build config 00:02:21.485 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.485 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.485 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.485 crypto/mlx5: not in enabled drivers build config 00:02:21.485 crypto/mvsam: not in enabled drivers build config 00:02:21.485 crypto/nitrox: not in enabled drivers build config 00:02:21.485 crypto/null: not in enabled drivers build config 00:02:21.485 crypto/octeontx: not in enabled drivers build config 00:02:21.485 crypto/openssl: not in enabled drivers build config 00:02:21.485 crypto/scheduler: not in enabled drivers build config 00:02:21.485 crypto/uadk: not in enabled drivers build config 00:02:21.485 crypto/virtio: not in enabled drivers build config 00:02:21.485 compress/isal: not in enabled drivers build config 00:02:21.485 compress/mlx5: not in enabled drivers build config 00:02:21.485 compress/nitrox: not in enabled drivers build config 00:02:21.485 compress/octeontx: not in enabled drivers build config 00:02:21.485 compress/zlib: not in enabled drivers build config 00:02:21.485 regex/*: missing internal 
dependency, "regexdev" 00:02:21.485 ml/*: missing internal dependency, "mldev" 00:02:21.485 vdpa/ifc: not in enabled drivers build config 00:02:21.485 vdpa/mlx5: not in enabled drivers build config 00:02:21.485 vdpa/nfp: not in enabled drivers build config 00:02:21.485 vdpa/sfc: not in enabled drivers build config 00:02:21.485 event/*: missing internal dependency, "eventdev" 00:02:21.485 baseband/*: missing internal dependency, "bbdev" 00:02:21.485 gpu/*: missing internal dependency, "gpudev" 00:02:21.485 00:02:21.485 00:02:21.485 Build targets in project: 85 00:02:21.485 00:02:21.485 DPDK 24.03.0 00:02:21.485 00:02:21.485 User defined options 00:02:21.485 buildtype : debug 00:02:21.485 default_library : shared 00:02:21.485 libdir : lib 00:02:21.485 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:21.485 b_sanitize : address 00:02:21.485 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.485 c_link_args : 00:02:21.485 cpu_instruction_set: native 00:02:21.485 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:02:21.485 disable_libs : acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:02:21.485 enable_docs : false 00:02:21.485 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:21.485 enable_kmods : false 00:02:21.485 max_lcores : 128 00:02:21.485 tests : false 00:02:21.485 00:02:21.485 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.759 ninja: Entering directory 
`/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:21.759 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.025 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.025 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.025 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.025 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.025 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.025 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.025 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.025 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.025 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:22.025 [11/268] Linking static target lib/librte_kvargs.a 00:02:22.025 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.025 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:22.025 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.025 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:22.025 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.025 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:22.025 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:22.025 [19/268] Linking static target lib/librte_log.a 00:02:22.025 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:22.025 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.025 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.025 [23/268] Linking static target 
lib/librte_pci.a 00:02:22.025 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:22.025 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.026 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.026 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:22.287 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.287 [29/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:22.287 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.287 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:22.287 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.287 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:22.287 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:22.287 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.548 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.548 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:22.548 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.548 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:22.548 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.548 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:22.548 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:22.548 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:22.548 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.549 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.549 
[46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.549 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.549 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:22.549 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.549 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.549 [51/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.549 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:22.549 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:22.549 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:22.549 [55/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:22.549 [56/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:22.549 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.549 [58/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:22.549 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:22.549 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:22.549 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:22.549 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.549 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:22.549 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:22.549 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.549 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:22.549 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.549 [68/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.549 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:22.549 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:22.549 [71/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.549 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.549 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.549 [74/268] Linking static target lib/librte_meter.a 00:02:22.549 [75/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:22.549 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.549 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.549 [78/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.549 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.549 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:22.549 [81/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.549 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:22.549 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.549 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.549 [85/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:22.549 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.549 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.549 [88/268] Linking static target lib/librte_telemetry.a 00:02:22.549 [89/268] Linking static target lib/librte_ring.a 00:02:22.549 [90/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.549 [91/268] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.549 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:22.549 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.549 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.549 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.549 [96/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.549 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:22.549 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.549 [99/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.549 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:22.549 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.549 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.549 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.549 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.549 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:22.549 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:22.549 [107/268] Linking static target lib/librte_cmdline.a 00:02:22.549 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.549 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.549 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:22.549 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:22.549 [112/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.549 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 
00:02:22.549 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:22.549 [115/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:22.549 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.549 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.549 [118/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.820 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.820 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.820 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:22.820 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:22.820 [123/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.820 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:22.820 [125/268] Linking static target lib/librte_timer.a 00:02:22.820 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.820 [127/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.820 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.820 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.820 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:22.821 [131/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.821 [132/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:22.821 [133/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:22.821 [134/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.821 [135/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.821 [136/268] Linking static target 
lib/librte_mempool.a 00:02:22.821 [137/268] Linking static target lib/librte_net.a 00:02:22.821 [138/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:22.821 [139/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:22.821 [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.821 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.821 [142/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:22.821 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.821 [144/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.821 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.821 [146/268] Linking static target lib/librte_dmadev.a 00:02:22.821 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.821 [148/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.821 [149/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.821 [150/268] Linking static target lib/librte_compressdev.a 00:02:22.821 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.821 [152/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.821 [153/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.821 [154/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.821 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.821 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.821 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.821 [158/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:22.821 [159/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.821 [160/268] Linking static target lib/librte_rcu.a 00:02:22.821 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.821 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.080 [163/268] Linking target lib/librte_log.so.24.1 00:02:23.080 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:23.080 [165/268] Linking static target lib/librte_eal.a 00:02:23.080 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.080 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:23.080 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:23.080 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:23.080 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.080 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.080 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.080 [173/268] Linking static target lib/librte_power.a 00:02:23.080 [174/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:23.080 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.080 [176/268] Linking static target lib/librte_reorder.a 00:02:23.080 [177/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:23.080 [178/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.080 [179/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.080 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.080 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.080 [182/268] Linking target lib/librte_kvargs.so.24.1 00:02:23.080 
[183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.080 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.080 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.080 [186/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.080 [187/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.080 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.080 [189/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.080 [190/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.338 [191/268] Linking target lib/librte_telemetry.so.24.1 00:02:23.338 [192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:23.338 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:23.338 [194/268] Linking static target lib/librte_mbuf.a 00:02:23.338 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:23.338 [196/268] Linking static target lib/librte_security.a 00:02:23.338 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.338 [198/268] Linking static target lib/librte_hash.a 00:02:23.338 [199/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.338 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.338 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.338 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.338 [203/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:23.338 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:02:23.338 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.338 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.338 [207/268] Linking static target drivers/librte_mempool_ring.a 00:02:23.338 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.338 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.338 [210/268] Linking static target drivers/librte_bus_pci.a 00:02:23.597 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.597 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.597 [213/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.597 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.597 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.597 [216/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.856 [217/268] Linking static target lib/librte_cryptodev.a 00:02:23.856 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.856 [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.856 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.115 [221/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.115 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.373 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.373 [224/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.373 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.373 [226/268] Linking static target lib/librte_ethdev.a 00:02:25.310 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.878 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.417 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.417 [230/268] Linking static target lib/librte_vhost.a 00:02:30.325 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.618 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.000 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.260 [234/268] Linking target lib/librte_eal.so.24.1 00:02:35.260 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:35.260 [236/268] Linking target lib/librte_meter.so.24.1 00:02:35.260 [237/268] Linking target lib/librte_pci.so.24.1 00:02:35.260 [238/268] Linking target lib/librte_timer.so.24.1 00:02:35.260 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:35.260 [240/268] Linking target lib/librte_ring.so.24.1 00:02:35.260 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:35.521 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:35.521 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:35.521 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:35.521 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.521 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:35.521 [247/268] Linking target 
lib/librte_rcu.so.24.1 00:02:35.521 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:35.521 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.521 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.521 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.781 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.781 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.781 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.781 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:35.781 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.781 [257/268] Linking target lib/librte_net.so.24.1 00:02:35.781 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:36.041 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:36.041 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:36.041 [261/268] Linking target lib/librte_security.so.24.1 00:02:36.041 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:36.041 [263/268] Linking target lib/librte_hash.so.24.1 00:02:36.041 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:36.041 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:36.301 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:36.301 [267/268] Linking target lib/librte_power.so.24.1 00:02:36.301 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:36.301 INFO: autodetecting backend as ninja 00:02:36.301 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:44.436 CC lib/ut/ut.o 00:02:44.436 CC lib/log/log.o 00:02:44.436 CC lib/log/log_flags.o 00:02:44.436 CC lib/log/log_deprecated.o 00:02:44.436 
CC lib/ut_mock/mock.o 00:02:44.436 LIB libspdk_ut.a 00:02:44.436 SO libspdk_ut.so.2.0 00:02:44.436 LIB libspdk_log.a 00:02:44.436 LIB libspdk_ut_mock.a 00:02:44.436 SYMLINK libspdk_ut.so 00:02:44.436 SO libspdk_log.so.7.1 00:02:44.436 SO libspdk_ut_mock.so.6.0 00:02:44.436 SYMLINK libspdk_log.so 00:02:44.436 SYMLINK libspdk_ut_mock.so 00:02:44.436 CC lib/util/bit_array.o 00:02:44.436 CC lib/util/base64.o 00:02:44.436 CC lib/util/cpuset.o 00:02:44.436 CC lib/util/crc16.o 00:02:44.436 CC lib/util/crc32.o 00:02:44.436 CC lib/util/crc32c.o 00:02:44.436 CC lib/util/crc32_ieee.o 00:02:44.436 CC lib/util/crc64.o 00:02:44.436 CC lib/util/file.o 00:02:44.436 CC lib/util/dif.o 00:02:44.436 CC lib/util/fd.o 00:02:44.436 CC lib/util/fd_group.o 00:02:44.436 CC lib/util/hexlify.o 00:02:44.436 CC lib/util/iov.o 00:02:44.436 CC lib/util/math.o 00:02:44.436 CC lib/util/net.o 00:02:44.436 CC lib/util/pipe.o 00:02:44.436 CC lib/util/uuid.o 00:02:44.436 CC lib/util/strerror_tls.o 00:02:44.436 CC lib/ioat/ioat.o 00:02:44.436 CC lib/util/string.o 00:02:44.436 CC lib/util/xor.o 00:02:44.436 CC lib/util/zipf.o 00:02:44.436 CC lib/util/md5.o 00:02:44.436 CXX lib/trace_parser/trace.o 00:02:44.436 CC lib/dma/dma.o 00:02:44.436 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.436 CC lib/vfio_user/host/vfio_user.o 00:02:44.436 LIB libspdk_dma.a 00:02:44.436 LIB libspdk_ioat.a 00:02:44.436 SO libspdk_dma.so.5.0 00:02:44.436 SO libspdk_ioat.so.7.0 00:02:44.436 SYMLINK libspdk_dma.so 00:02:44.436 SYMLINK libspdk_ioat.so 00:02:44.436 LIB libspdk_vfio_user.a 00:02:44.436 SO libspdk_vfio_user.so.5.0 00:02:44.436 SYMLINK libspdk_vfio_user.so 00:02:44.436 LIB libspdk_util.a 00:02:44.436 SO libspdk_util.so.10.1 00:02:44.436 LIB libspdk_trace_parser.a 00:02:44.697 SO libspdk_trace_parser.so.6.0 00:02:44.697 SYMLINK libspdk_util.so 00:02:44.697 SYMLINK libspdk_trace_parser.so 00:02:44.958 CC lib/vmd/vmd.o 00:02:44.958 CC lib/vmd/led.o 00:02:44.958 CC lib/env_dpdk/memory.o 00:02:44.958 CC lib/env_dpdk/env.o 
00:02:44.958 CC lib/conf/conf.o 00:02:44.958 CC lib/env_dpdk/pci.o 00:02:44.958 CC lib/env_dpdk/init.o 00:02:44.958 CC lib/env_dpdk/threads.o 00:02:44.958 CC lib/env_dpdk/pci_ioat.o 00:02:44.958 CC lib/env_dpdk/pci_virtio.o 00:02:44.958 CC lib/env_dpdk/pci_vmd.o 00:02:44.958 CC lib/env_dpdk/pci_idxd.o 00:02:44.958 CC lib/env_dpdk/pci_event.o 00:02:44.958 CC lib/env_dpdk/sigbus_handler.o 00:02:44.958 CC lib/env_dpdk/pci_dpdk.o 00:02:44.958 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.958 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.958 CC lib/rdma_utils/rdma_utils.o 00:02:44.958 CC lib/json/json_parse.o 00:02:44.958 CC lib/json/json_util.o 00:02:44.958 CC lib/json/json_write.o 00:02:44.958 CC lib/idxd/idxd.o 00:02:44.958 CC lib/idxd/idxd_user.o 00:02:44.958 CC lib/idxd/idxd_kernel.o 00:02:45.218 LIB libspdk_conf.a 00:02:45.218 SO libspdk_conf.so.6.0 00:02:45.218 LIB libspdk_rdma_utils.a 00:02:45.218 LIB libspdk_json.a 00:02:45.218 SO libspdk_rdma_utils.so.1.0 00:02:45.218 SYMLINK libspdk_conf.so 00:02:45.218 SO libspdk_json.so.6.0 00:02:45.478 SYMLINK libspdk_rdma_utils.so 00:02:45.478 SYMLINK libspdk_json.so 00:02:45.478 LIB libspdk_idxd.a 00:02:45.478 LIB libspdk_vmd.a 00:02:45.737 SO libspdk_idxd.so.12.1 00:02:45.737 SO libspdk_vmd.so.6.0 00:02:45.737 CC lib/rdma_provider/common.o 00:02:45.737 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.737 SYMLINK libspdk_vmd.so 00:02:45.737 SYMLINK libspdk_idxd.so 00:02:45.737 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.737 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.737 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.738 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.997 LIB libspdk_rdma_provider.a 00:02:45.997 SO libspdk_rdma_provider.so.7.0 00:02:45.998 LIB libspdk_jsonrpc.a 00:02:45.998 SYMLINK libspdk_rdma_provider.so 00:02:45.998 SO libspdk_jsonrpc.so.6.0 00:02:45.998 SYMLINK libspdk_jsonrpc.so 00:02:46.257 LIB libspdk_env_dpdk.a 00:02:46.257 SO libspdk_env_dpdk.so.15.1 00:02:46.517 CC lib/rpc/rpc.o 00:02:46.517 SYMLINK 
libspdk_env_dpdk.so 00:02:46.776 LIB libspdk_rpc.a 00:02:46.776 SO libspdk_rpc.so.6.0 00:02:46.776 SYMLINK libspdk_rpc.so 00:02:47.036 CC lib/trace/trace.o 00:02:47.036 CC lib/trace/trace_flags.o 00:02:47.036 CC lib/trace/trace_rpc.o 00:02:47.036 CC lib/notify/notify.o 00:02:47.036 CC lib/notify/notify_rpc.o 00:02:47.036 CC lib/keyring/keyring.o 00:02:47.036 CC lib/keyring/keyring_rpc.o 00:02:47.295 LIB libspdk_notify.a 00:02:47.295 LIB libspdk_trace.a 00:02:47.295 SO libspdk_notify.so.6.0 00:02:47.295 SO libspdk_trace.so.11.0 00:02:47.295 LIB libspdk_keyring.a 00:02:47.295 SYMLINK libspdk_notify.so 00:02:47.295 SO libspdk_keyring.so.2.0 00:02:47.295 SYMLINK libspdk_trace.so 00:02:47.555 SYMLINK libspdk_keyring.so 00:02:47.815 CC lib/thread/thread.o 00:02:47.815 CC lib/thread/iobuf.o 00:02:47.815 CC lib/sock/sock.o 00:02:47.815 CC lib/sock/sock_rpc.o 00:02:48.075 LIB libspdk_sock.a 00:02:48.075 SO libspdk_sock.so.10.0 00:02:48.335 SYMLINK libspdk_sock.so 00:02:48.594 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.594 CC lib/nvme/nvme_ctrlr.o 00:02:48.594 CC lib/nvme/nvme_fabric.o 00:02:48.594 CC lib/nvme/nvme_pcie_common.o 00:02:48.594 CC lib/nvme/nvme_ns_cmd.o 00:02:48.594 CC lib/nvme/nvme_ns.o 00:02:48.594 CC lib/nvme/nvme.o 00:02:48.594 CC lib/nvme/nvme_pcie.o 00:02:48.594 CC lib/nvme/nvme_qpair.o 00:02:48.594 CC lib/nvme/nvme_quirks.o 00:02:48.594 CC lib/nvme/nvme_transport.o 00:02:48.594 CC lib/nvme/nvme_discovery.o 00:02:48.594 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:48.594 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:48.594 CC lib/nvme/nvme_tcp.o 00:02:48.594 CC lib/nvme/nvme_opal.o 00:02:48.594 CC lib/nvme/nvme_io_msg.o 00:02:48.594 CC lib/nvme/nvme_poll_group.o 00:02:48.594 CC lib/nvme/nvme_zns.o 00:02:48.594 CC lib/nvme/nvme_stubs.o 00:02:48.594 CC lib/nvme/nvme_cuse.o 00:02:48.594 CC lib/nvme/nvme_auth.o 00:02:48.594 CC lib/nvme/nvme_rdma.o 00:02:49.163 LIB libspdk_thread.a 00:02:49.163 SO libspdk_thread.so.11.0 00:02:49.163 SYMLINK libspdk_thread.so 00:02:49.732 CC 
lib/init/json_config.o 00:02:49.732 CC lib/accel/accel.o 00:02:49.732 CC lib/init/subsystem.o 00:02:49.732 CC lib/accel/accel_rpc.o 00:02:49.732 CC lib/init/subsystem_rpc.o 00:02:49.732 CC lib/accel/accel_sw.o 00:02:49.732 CC lib/init/rpc.o 00:02:49.732 CC lib/fsdev/fsdev_io.o 00:02:49.732 CC lib/virtio/virtio.o 00:02:49.732 CC lib/fsdev/fsdev.o 00:02:49.732 CC lib/blob/blobstore.o 00:02:49.732 CC lib/virtio/virtio_vhost_user.o 00:02:49.732 CC lib/blob/request.o 00:02:49.732 CC lib/fsdev/fsdev_rpc.o 00:02:49.732 CC lib/blob/zeroes.o 00:02:49.732 CC lib/virtio/virtio_vfio_user.o 00:02:49.732 CC lib/virtio/virtio_pci.o 00:02:49.732 CC lib/blob/blob_bs_dev.o 00:02:49.732 LIB libspdk_init.a 00:02:49.991 SO libspdk_init.so.6.0 00:02:49.991 LIB libspdk_virtio.a 00:02:49.991 SYMLINK libspdk_init.so 00:02:49.991 SO libspdk_virtio.so.7.0 00:02:49.991 SYMLINK libspdk_virtio.so 00:02:50.250 LIB libspdk_fsdev.a 00:02:50.250 SO libspdk_fsdev.so.2.0 00:02:50.250 CC lib/event/app.o 00:02:50.250 CC lib/event/reactor.o 00:02:50.250 CC lib/event/log_rpc.o 00:02:50.250 CC lib/event/app_rpc.o 00:02:50.250 CC lib/event/scheduler_static.o 00:02:50.250 SYMLINK libspdk_fsdev.so 00:02:50.509 LIB libspdk_accel.a 00:02:50.509 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:50.509 LIB libspdk_nvme.a 00:02:50.768 SO libspdk_accel.so.16.0 00:02:50.768 LIB libspdk_event.a 00:02:50.768 SYMLINK libspdk_accel.so 00:02:50.768 SO libspdk_event.so.14.0 00:02:50.768 SO libspdk_nvme.so.15.0 00:02:50.768 SYMLINK libspdk_event.so 00:02:51.027 SYMLINK libspdk_nvme.so 00:02:51.027 CC lib/bdev/bdev.o 00:02:51.027 CC lib/bdev/bdev_rpc.o 00:02:51.027 CC lib/bdev/bdev_zone.o 00:02:51.027 CC lib/bdev/part.o 00:02:51.027 CC lib/bdev/scsi_nvme.o 00:02:51.286 LIB libspdk_fuse_dispatcher.a 00:02:51.286 SO libspdk_fuse_dispatcher.so.1.0 00:02:51.286 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.666 LIB libspdk_blob.a 00:02:52.666 SO libspdk_blob.so.12.0 00:02:52.666 SYMLINK libspdk_blob.so 00:02:52.924 CC 
lib/lvol/lvol.o 00:02:52.924 CC lib/blobfs/blobfs.o 00:02:52.924 CC lib/blobfs/tree.o 00:02:53.494 LIB libspdk_bdev.a 00:02:53.494 SO libspdk_bdev.so.17.0 00:02:53.753 SYMLINK libspdk_bdev.so 00:02:53.753 LIB libspdk_blobfs.a 00:02:53.753 SO libspdk_blobfs.so.11.0 00:02:53.753 LIB libspdk_lvol.a 00:02:53.753 SO libspdk_lvol.so.11.0 00:02:53.753 SYMLINK libspdk_blobfs.so 00:02:54.012 SYMLINK libspdk_lvol.so 00:02:54.012 CC lib/ublk/ublk.o 00:02:54.012 CC lib/ublk/ublk_rpc.o 00:02:54.012 CC lib/ftl/ftl_core.o 00:02:54.012 CC lib/ftl/ftl_init.o 00:02:54.012 CC lib/ftl/ftl_layout.o 00:02:54.012 CC lib/ftl/ftl_debug.o 00:02:54.012 CC lib/ftl/ftl_io.o 00:02:54.012 CC lib/ftl/ftl_sb.o 00:02:54.012 CC lib/ftl/ftl_l2p.o 00:02:54.012 CC lib/scsi/dev.o 00:02:54.012 CC lib/nbd/nbd_rpc.o 00:02:54.012 CC lib/nbd/nbd.o 00:02:54.012 CC lib/scsi/lun.o 00:02:54.012 CC lib/nvmf/ctrlr.o 00:02:54.012 CC lib/ftl/ftl_l2p_flat.o 00:02:54.012 CC lib/scsi/port.o 00:02:54.012 CC lib/nvmf/ctrlr_discovery.o 00:02:54.012 CC lib/ftl/ftl_nv_cache.o 00:02:54.012 CC lib/scsi/scsi.o 00:02:54.012 CC lib/scsi/scsi_bdev.o 00:02:54.012 CC lib/nvmf/ctrlr_bdev.o 00:02:54.012 CC lib/ftl/ftl_band.o 00:02:54.012 CC lib/nvmf/nvmf_rpc.o 00:02:54.012 CC lib/nvmf/subsystem.o 00:02:54.012 CC lib/scsi/scsi_pr.o 00:02:54.012 CC lib/nvmf/nvmf.o 00:02:54.012 CC lib/ftl/ftl_band_ops.o 00:02:54.012 CC lib/scsi/scsi_rpc.o 00:02:54.012 CC lib/ftl/ftl_writer.o 00:02:54.012 CC lib/scsi/task.o 00:02:54.012 CC lib/nvmf/transport.o 00:02:54.012 CC lib/ftl/ftl_rq.o 00:02:54.012 CC lib/nvmf/tcp.o 00:02:54.012 CC lib/ftl/ftl_reloc.o 00:02:54.012 CC lib/ftl/ftl_l2p_cache.o 00:02:54.012 CC lib/nvmf/stubs.o 00:02:54.012 CC lib/nvmf/mdns_server.o 00:02:54.012 CC lib/ftl/ftl_p2l.o 00:02:54.012 CC lib/ftl/ftl_p2l_log.o 00:02:54.012 CC lib/nvmf/rdma.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt.o 00:02:54.012 CC lib/nvmf/auth.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:54.012 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:54.012 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:54.012 CC lib/ftl/utils/ftl_conf.o 00:02:54.012 CC lib/ftl/utils/ftl_md.o 00:02:54.012 CC lib/ftl/utils/ftl_mempool.o 00:02:54.012 CC lib/ftl/utils/ftl_bitmap.o 00:02:54.012 CC lib/ftl/utils/ftl_property.o 00:02:54.012 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:54.012 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.012 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.012 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.012 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.012 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.012 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.012 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.012 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.012 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.012 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.012 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.012 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.012 CC lib/ftl/base/ftl_base_dev.o 00:02:54.012 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.012 CC lib/ftl/ftl_trace.o 00:02:54.947 LIB libspdk_nbd.a 00:02:54.947 SO libspdk_nbd.so.7.0 00:02:54.947 LIB libspdk_scsi.a 00:02:54.947 SYMLINK libspdk_nbd.so 00:02:54.947 LIB libspdk_ublk.a 00:02:54.947 SO libspdk_scsi.so.9.0 00:02:54.947 SO libspdk_ublk.so.3.0 00:02:54.947 SYMLINK libspdk_scsi.so 00:02:54.947 SYMLINK libspdk_ublk.so 00:02:55.205 LIB libspdk_ftl.a 00:02:55.205 CC lib/vhost/vhost.o 00:02:55.205 CC lib/vhost/vhost_rpc.o 00:02:55.205 CC lib/vhost/vhost_scsi.o 00:02:55.205 CC lib/vhost/vhost_blk.o 00:02:55.205 CC lib/vhost/rte_vhost_user.o 00:02:55.205 CC lib/iscsi/conn.o 
00:02:55.205 CC lib/iscsi/iscsi.o 00:02:55.205 CC lib/iscsi/init_grp.o 00:02:55.205 CC lib/iscsi/param.o 00:02:55.205 CC lib/iscsi/portal_grp.o 00:02:55.205 CC lib/iscsi/tgt_node.o 00:02:55.205 CC lib/iscsi/iscsi_subsystem.o 00:02:55.205 CC lib/iscsi/iscsi_rpc.o 00:02:55.205 CC lib/iscsi/task.o 00:02:55.463 SO libspdk_ftl.so.9.0 00:02:55.722 SYMLINK libspdk_ftl.so 00:02:56.289 LIB libspdk_vhost.a 00:02:56.289 SO libspdk_vhost.so.8.0 00:02:56.289 SYMLINK libspdk_vhost.so 00:02:56.289 LIB libspdk_nvmf.a 00:02:56.289 SO libspdk_nvmf.so.20.0 00:02:56.548 LIB libspdk_iscsi.a 00:02:56.548 SYMLINK libspdk_nvmf.so 00:02:56.548 SO libspdk_iscsi.so.8.0 00:02:56.807 SYMLINK libspdk_iscsi.so 00:02:57.374 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.374 LIB libspdk_env_dpdk_rpc.a 00:02:57.374 CC module/accel/iaa/accel_iaa.o 00:02:57.374 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.374 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.374 CC module/fsdev/aio/fsdev_aio.o 00:02:57.374 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.374 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.374 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.374 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.374 CC module/blob/bdev/blob_bdev.o 00:02:57.374 CC module/accel/error/accel_error.o 00:02:57.374 CC module/accel/error/accel_error_rpc.o 00:02:57.374 CC module/keyring/linux/keyring.o 00:02:57.374 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.632 CC module/accel/ioat/accel_ioat.o 00:02:57.632 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.632 CC module/keyring/linux/keyring_rpc.o 00:02:57.632 CC module/keyring/file/keyring.o 00:02:57.632 CC module/accel/dsa/accel_dsa.o 00:02:57.632 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.632 CC module/keyring/file/keyring_rpc.o 00:02:57.632 CC module/sock/posix/posix.o 00:02:57.632 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.632 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.633 LIB libspdk_scheduler_gscheduler.a 00:02:57.633 LIB libspdk_accel_iaa.a 
00:02:57.633 LIB libspdk_keyring_file.a 00:02:57.633 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.633 LIB libspdk_keyring_linux.a 00:02:57.633 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.633 SO libspdk_keyring_file.so.2.0 00:02:57.633 SO libspdk_accel_iaa.so.3.0 00:02:57.633 LIB libspdk_scheduler_dynamic.a 00:02:57.633 LIB libspdk_accel_error.a 00:02:57.633 LIB libspdk_accel_ioat.a 00:02:57.633 SO libspdk_keyring_linux.so.1.0 00:02:57.633 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.633 SO libspdk_accel_ioat.so.6.0 00:02:57.633 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.633 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.633 SYMLINK libspdk_keyring_file.so 00:02:57.633 SO libspdk_accel_error.so.2.0 00:02:57.891 SYMLINK libspdk_accel_iaa.so 00:02:57.891 LIB libspdk_blob_bdev.a 00:02:57.891 SYMLINK libspdk_keyring_linux.so 00:02:57.891 SYMLINK libspdk_accel_error.so 00:02:57.891 SYMLINK libspdk_accel_ioat.so 00:02:57.891 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.891 LIB libspdk_accel_dsa.a 00:02:57.891 SO libspdk_blob_bdev.so.12.0 00:02:57.891 SO libspdk_accel_dsa.so.5.0 00:02:57.891 SYMLINK libspdk_blob_bdev.so 00:02:57.891 SYMLINK libspdk_accel_dsa.so 00:02:58.150 LIB libspdk_fsdev_aio.a 00:02:58.150 SO libspdk_fsdev_aio.so.1.0 00:02:58.150 LIB libspdk_sock_posix.a 00:02:58.150 SYMLINK libspdk_fsdev_aio.so 00:02:58.407 SO libspdk_sock_posix.so.6.0 00:02:58.407 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.407 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.407 CC module/bdev/nvme/bdev_nvme.o 00:02:58.407 CC module/bdev/delay/vbdev_delay.o 00:02:58.407 CC module/bdev/nvme/nvme_rpc.o 00:02:58.407 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.407 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.407 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.407 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.407 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.407 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.407 CC module/bdev/gpt/gpt.o 
00:02:58.407 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.407 CC module/bdev/malloc/bdev_malloc.o 00:02:58.407 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.407 CC module/bdev/nvme/vbdev_opal.o 00:02:58.407 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.407 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.407 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.407 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.407 SYMLINK libspdk_sock_posix.so 00:02:58.407 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.407 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.407 CC module/bdev/aio/bdev_aio.o 00:02:58.407 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.407 CC module/bdev/error/vbdev_error.o 00:02:58.407 CC module/bdev/ftl/bdev_ftl.o 00:02:58.407 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.407 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.407 CC module/bdev/split/vbdev_split.o 00:02:58.407 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.407 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.407 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.408 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.408 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.408 CC module/bdev/raid/bdev_raid.o 00:02:58.408 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.408 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.408 CC module/bdev/raid/raid0.o 00:02:58.408 CC module/bdev/raid/concat.o 00:02:58.408 CC module/bdev/raid/raid1.o 00:02:58.408 CC module/bdev/null/bdev_null.o 00:02:58.408 CC module/bdev/null/bdev_null_rpc.o 00:02:58.666 LIB libspdk_blobfs_bdev.a 00:02:58.666 SO libspdk_blobfs_bdev.so.6.0 00:02:58.666 LIB libspdk_bdev_split.a 00:02:58.666 SO libspdk_bdev_split.so.6.0 00:02:58.666 LIB libspdk_bdev_null.a 00:02:58.666 LIB libspdk_bdev_ftl.a 00:02:58.666 SYMLINK libspdk_blobfs_bdev.so 00:02:58.666 LIB libspdk_bdev_error.a 00:02:58.666 LIB libspdk_bdev_gpt.a 00:02:58.666 LIB libspdk_bdev_passthru.a 00:02:58.666 SO libspdk_bdev_null.so.6.0 00:02:58.666 SO libspdk_bdev_error.so.6.0 00:02:58.666 SYMLINK 
libspdk_bdev_split.so 00:02:58.666 SO libspdk_bdev_ftl.so.6.0 00:02:58.925 SO libspdk_bdev_gpt.so.6.0 00:02:58.925 LIB libspdk_bdev_zone_block.a 00:02:58.925 LIB libspdk_bdev_aio.a 00:02:58.925 SO libspdk_bdev_passthru.so.6.0 00:02:58.925 SYMLINK libspdk_bdev_null.so 00:02:58.925 LIB libspdk_bdev_delay.a 00:02:58.925 LIB libspdk_bdev_malloc.a 00:02:58.925 LIB libspdk_bdev_iscsi.a 00:02:58.925 SO libspdk_bdev_zone_block.so.6.0 00:02:58.925 SO libspdk_bdev_aio.so.6.0 00:02:58.925 SYMLINK libspdk_bdev_gpt.so 00:02:58.925 SYMLINK libspdk_bdev_ftl.so 00:02:58.925 SYMLINK libspdk_bdev_error.so 00:02:58.925 SYMLINK libspdk_bdev_passthru.so 00:02:58.925 SO libspdk_bdev_malloc.so.6.0 00:02:58.925 SO libspdk_bdev_delay.so.6.0 00:02:58.925 SO libspdk_bdev_iscsi.so.6.0 00:02:58.925 SYMLINK libspdk_bdev_aio.so 00:02:58.925 SYMLINK libspdk_bdev_zone_block.so 00:02:58.925 LIB libspdk_bdev_lvol.a 00:02:58.925 SYMLINK libspdk_bdev_malloc.so 00:02:58.925 SYMLINK libspdk_bdev_delay.so 00:02:58.925 SYMLINK libspdk_bdev_iscsi.so 00:02:58.925 LIB libspdk_bdev_virtio.a 00:02:58.925 SO libspdk_bdev_lvol.so.6.0 00:02:58.925 SO libspdk_bdev_virtio.so.6.0 00:02:59.184 SYMLINK libspdk_bdev_lvol.so 00:02:59.184 SYMLINK libspdk_bdev_virtio.so 00:02:59.443 LIB libspdk_bdev_raid.a 00:02:59.443 SO libspdk_bdev_raid.so.6.0 00:02:59.701 SYMLINK libspdk_bdev_raid.so 00:03:01.080 LIB libspdk_bdev_nvme.a 00:03:01.080 SO libspdk_bdev_nvme.so.7.1 00:03:01.080 SYMLINK libspdk_bdev_nvme.so 00:03:01.650 CC module/event/subsystems/vmd/vmd.o 00:03:01.650 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:01.650 CC module/event/subsystems/scheduler/scheduler.o 00:03:01.650 CC module/event/subsystems/keyring/keyring.o 00:03:01.650 CC module/event/subsystems/iobuf/iobuf.o 00:03:01.650 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:01.650 CC module/event/subsystems/sock/sock.o 00:03:01.650 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:01.650 CC module/event/subsystems/fsdev/fsdev.o 00:03:01.912 LIB 
libspdk_event_scheduler.a 00:03:01.912 LIB libspdk_event_keyring.a 00:03:01.912 LIB libspdk_event_vmd.a 00:03:01.912 LIB libspdk_event_fsdev.a 00:03:01.912 SO libspdk_event_scheduler.so.4.0 00:03:01.912 SO libspdk_event_keyring.so.1.0 00:03:01.912 LIB libspdk_event_sock.a 00:03:01.912 LIB libspdk_event_vhost_blk.a 00:03:01.912 LIB libspdk_event_iobuf.a 00:03:01.912 SO libspdk_event_vmd.so.6.0 00:03:01.912 SO libspdk_event_fsdev.so.1.0 00:03:01.912 SO libspdk_event_sock.so.5.0 00:03:01.912 SO libspdk_event_vhost_blk.so.3.0 00:03:01.912 SO libspdk_event_iobuf.so.3.0 00:03:01.912 SYMLINK libspdk_event_keyring.so 00:03:01.912 SYMLINK libspdk_event_scheduler.so 00:03:01.912 SYMLINK libspdk_event_sock.so 00:03:01.912 SYMLINK libspdk_event_fsdev.so 00:03:01.912 SYMLINK libspdk_event_vmd.so 00:03:01.912 SYMLINK libspdk_event_vhost_blk.so 00:03:01.912 SYMLINK libspdk_event_iobuf.so 00:03:02.184 CC module/event/subsystems/accel/accel.o 00:03:02.499 LIB libspdk_event_accel.a 00:03:02.499 SO libspdk_event_accel.so.6.0 00:03:02.499 SYMLINK libspdk_event_accel.so 00:03:02.813 CC module/event/subsystems/bdev/bdev.o 00:03:03.117 LIB libspdk_event_bdev.a 00:03:03.117 SO libspdk_event_bdev.so.6.0 00:03:03.117 SYMLINK libspdk_event_bdev.so 00:03:03.405 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:03.405 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:03.664 CC module/event/subsystems/scsi/scsi.o 00:03:03.664 CC module/event/subsystems/ublk/ublk.o 00:03:03.664 CC module/event/subsystems/nbd/nbd.o 00:03:03.664 LIB libspdk_event_nbd.a 00:03:03.664 LIB libspdk_event_ublk.a 00:03:03.664 LIB libspdk_event_scsi.a 00:03:03.664 SO libspdk_event_ublk.so.3.0 00:03:03.664 SO libspdk_event_nbd.so.6.0 00:03:03.664 LIB libspdk_event_nvmf.a 00:03:03.664 SO libspdk_event_scsi.so.6.0 00:03:03.664 SO libspdk_event_nvmf.so.6.0 00:03:03.664 SYMLINK libspdk_event_ublk.so 00:03:03.664 SYMLINK libspdk_event_nbd.so 00:03:03.923 SYMLINK libspdk_event_scsi.so 00:03:03.923 SYMLINK libspdk_event_nvmf.so 
00:03:04.183 CC module/event/subsystems/iscsi/iscsi.o 00:03:04.183 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:04.183 LIB libspdk_event_iscsi.a 00:03:04.183 LIB libspdk_event_vhost_scsi.a 00:03:04.442 SO libspdk_event_iscsi.so.6.0 00:03:04.442 SO libspdk_event_vhost_scsi.so.3.0 00:03:04.442 SYMLINK libspdk_event_iscsi.so 00:03:04.442 SYMLINK libspdk_event_vhost_scsi.so 00:03:04.702 SO libspdk.so.6.0 00:03:04.702 SYMLINK libspdk.so 00:03:04.962 CC app/spdk_lspci/spdk_lspci.o 00:03:04.962 CC app/spdk_nvme_perf/perf.o 00:03:04.962 CXX app/trace/trace.o 00:03:04.962 CC app/trace_record/trace_record.o 00:03:04.962 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.962 CC app/spdk_top/spdk_top.o 00:03:04.962 CC test/rpc_client/rpc_client_test.o 00:03:04.962 TEST_HEADER include/spdk/accel.h 00:03:04.962 TEST_HEADER include/spdk/assert.h 00:03:04.962 TEST_HEADER include/spdk/accel_module.h 00:03:04.962 TEST_HEADER include/spdk/barrier.h 00:03:04.962 TEST_HEADER include/spdk/bdev.h 00:03:04.962 TEST_HEADER include/spdk/base64.h 00:03:04.962 CC app/spdk_nvme_identify/identify.o 00:03:04.962 TEST_HEADER include/spdk/bdev_module.h 00:03:04.962 TEST_HEADER include/spdk/bit_array.h 00:03:04.962 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.962 CC app/spdk_dd/spdk_dd.o 00:03:04.962 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.962 TEST_HEADER include/spdk/bit_pool.h 00:03:04.962 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.962 TEST_HEADER include/spdk/blobfs.h 00:03:04.962 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.962 TEST_HEADER include/spdk/blob.h 00:03:04.962 TEST_HEADER include/spdk/conf.h 00:03:04.962 TEST_HEADER include/spdk/config.h 00:03:04.962 TEST_HEADER include/spdk/cpuset.h 00:03:04.962 TEST_HEADER include/spdk/crc32.h 00:03:04.962 TEST_HEADER include/spdk/crc16.h 00:03:04.962 TEST_HEADER include/spdk/dif.h 00:03:04.962 TEST_HEADER include/spdk/endian.h 00:03:04.962 TEST_HEADER include/spdk/crc64.h 00:03:04.962 TEST_HEADER include/spdk/dma.h 
00:03:04.962 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.962 TEST_HEADER include/spdk/event.h 00:03:04.962 TEST_HEADER include/spdk/fd.h 00:03:04.962 TEST_HEADER include/spdk/env.h 00:03:04.962 TEST_HEADER include/spdk/fd_group.h 00:03:04.962 TEST_HEADER include/spdk/file.h 00:03:04.962 TEST_HEADER include/spdk/fsdev.h 00:03:04.962 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.962 TEST_HEADER include/spdk/ftl.h 00:03:04.962 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.962 TEST_HEADER include/spdk/hexlify.h 00:03:04.962 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.962 CC app/nvmf_tgt/nvmf_main.o 00:03:04.962 TEST_HEADER include/spdk/histogram_data.h 00:03:04.962 TEST_HEADER include/spdk/idxd.h 00:03:04.962 TEST_HEADER include/spdk/ioat.h 00:03:04.962 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.962 TEST_HEADER include/spdk/init.h 00:03:04.962 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.962 TEST_HEADER include/spdk/json.h 00:03:04.962 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.962 TEST_HEADER include/spdk/keyring.h 00:03:04.962 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.962 TEST_HEADER include/spdk/likely.h 00:03:04.962 TEST_HEADER include/spdk/keyring_module.h 00:03:04.962 TEST_HEADER include/spdk/log.h 00:03:04.962 TEST_HEADER include/spdk/md5.h 00:03:04.962 TEST_HEADER include/spdk/lvol.h 00:03:04.962 TEST_HEADER include/spdk/mmio.h 00:03:04.962 TEST_HEADER include/spdk/memory.h 00:03:04.962 TEST_HEADER include/spdk/net.h 00:03:04.962 TEST_HEADER include/spdk/nbd.h 00:03:04.962 TEST_HEADER include/spdk/notify.h 00:03:04.962 TEST_HEADER include/spdk/nvme.h 00:03:04.962 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.962 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.962 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.962 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.962 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.962 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.962 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.962 TEST_HEADER 
include/spdk/nvmf.h 00:03:04.962 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.962 TEST_HEADER include/spdk/opal.h 00:03:04.962 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.962 TEST_HEADER include/spdk/pci_ids.h 00:03:04.962 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.962 TEST_HEADER include/spdk/pipe.h 00:03:04.962 CC app/spdk_tgt/spdk_tgt.o 00:03:04.962 TEST_HEADER include/spdk/opal_spec.h 00:03:04.962 TEST_HEADER include/spdk/reduce.h 00:03:04.962 TEST_HEADER include/spdk/queue.h 00:03:04.962 TEST_HEADER include/spdk/scheduler.h 00:03:04.962 TEST_HEADER include/spdk/rpc.h 00:03:04.962 TEST_HEADER include/spdk/scsi.h 00:03:04.962 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.962 TEST_HEADER include/spdk/stdinc.h 00:03:04.962 TEST_HEADER include/spdk/string.h 00:03:04.962 TEST_HEADER include/spdk/sock.h 00:03:04.962 TEST_HEADER include/spdk/thread.h 00:03:04.962 TEST_HEADER include/spdk/trace.h 00:03:04.962 TEST_HEADER include/spdk/trace_parser.h 00:03:04.962 TEST_HEADER include/spdk/tree.h 00:03:04.962 TEST_HEADER include/spdk/ublk.h 00:03:04.962 TEST_HEADER include/spdk/util.h 00:03:04.962 TEST_HEADER include/spdk/uuid.h 00:03:04.962 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.962 TEST_HEADER include/spdk/version.h 00:03:04.962 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.962 TEST_HEADER include/spdk/vhost.h 00:03:04.962 TEST_HEADER include/spdk/xor.h 00:03:04.962 TEST_HEADER include/spdk/vmd.h 00:03:04.962 TEST_HEADER include/spdk/zipf.h 00:03:04.962 CXX test/cpp_headers/accel.o 00:03:04.962 CXX test/cpp_headers/accel_module.o 00:03:04.962 CXX test/cpp_headers/assert.o 00:03:04.962 CXX test/cpp_headers/barrier.o 00:03:04.962 CXX test/cpp_headers/base64.o 00:03:04.962 CXX test/cpp_headers/bdev_zone.o 00:03:04.962 CXX test/cpp_headers/bdev.o 00:03:04.962 CXX test/cpp_headers/bdev_module.o 00:03:04.962 CXX test/cpp_headers/bit_array.o 00:03:04.962 CXX test/cpp_headers/bit_pool.o 00:03:04.962 CXX test/cpp_headers/blobfs.o 00:03:04.962 CXX 
test/cpp_headers/blobfs_bdev.o 00:03:04.962 CXX test/cpp_headers/blob_bdev.o 00:03:04.962 CXX test/cpp_headers/conf.o 00:03:04.962 CXX test/cpp_headers/blob.o 00:03:04.962 CXX test/cpp_headers/config.o 00:03:04.962 CXX test/cpp_headers/crc32.o 00:03:04.962 CXX test/cpp_headers/crc64.o 00:03:04.962 CXX test/cpp_headers/cpuset.o 00:03:04.962 CXX test/cpp_headers/dma.o 00:03:04.962 CXX test/cpp_headers/crc16.o 00:03:04.962 CXX test/cpp_headers/endian.o 00:03:04.962 CXX test/cpp_headers/env_dpdk.o 00:03:04.962 CXX test/cpp_headers/dif.o 00:03:04.962 CXX test/cpp_headers/fd_group.o 00:03:04.962 CXX test/cpp_headers/env.o 00:03:04.962 CXX test/cpp_headers/fd.o 00:03:04.963 CXX test/cpp_headers/event.o 00:03:04.963 CXX test/cpp_headers/file.o 00:03:05.227 CXX test/cpp_headers/fsdev.o 00:03:05.227 CXX test/cpp_headers/fsdev_module.o 00:03:05.227 CXX test/cpp_headers/ftl.o 00:03:05.227 CXX test/cpp_headers/fuse_dispatcher.o 00:03:05.227 CXX test/cpp_headers/hexlify.o 00:03:05.227 CXX test/cpp_headers/gpt_spec.o 00:03:05.227 CXX test/cpp_headers/histogram_data.o 00:03:05.227 CXX test/cpp_headers/idxd_spec.o 00:03:05.227 CXX test/cpp_headers/idxd.o 00:03:05.227 CXX test/cpp_headers/ioat.o 00:03:05.227 CXX test/cpp_headers/init.o 00:03:05.227 CXX test/cpp_headers/iscsi_spec.o 00:03:05.227 CXX test/cpp_headers/json.o 00:03:05.227 CXX test/cpp_headers/ioat_spec.o 00:03:05.227 CXX test/cpp_headers/keyring.o 00:03:05.227 CXX test/cpp_headers/jsonrpc.o 00:03:05.227 CXX test/cpp_headers/likely.o 00:03:05.227 CXX test/cpp_headers/keyring_module.o 00:03:05.227 CXX test/cpp_headers/md5.o 00:03:05.227 CXX test/cpp_headers/log.o 00:03:05.227 CXX test/cpp_headers/lvol.o 00:03:05.227 CXX test/cpp_headers/memory.o 00:03:05.227 CXX test/cpp_headers/nbd.o 00:03:05.227 CXX test/cpp_headers/mmio.o 00:03:05.227 CXX test/cpp_headers/net.o 00:03:05.227 CXX test/cpp_headers/notify.o 00:03:05.227 CXX test/cpp_headers/nvme.o 00:03:05.227 CXX test/cpp_headers/nvme_intel.o 00:03:05.227 CXX 
test/cpp_headers/nvme_ocssd.o 00:03:05.227 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.227 CXX test/cpp_headers/nvme_spec.o 00:03:05.227 CXX test/cpp_headers/nvme_zns.o 00:03:05.227 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.227 CXX test/cpp_headers/nvmf.o 00:03:05.227 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:05.227 CXX test/cpp_headers/nvmf_spec.o 00:03:05.227 CXX test/cpp_headers/nvmf_transport.o 00:03:05.227 CXX test/cpp_headers/opal.o 00:03:05.227 CXX test/cpp_headers/pci_ids.o 00:03:05.227 CXX test/cpp_headers/opal_spec.o 00:03:05.227 CXX test/cpp_headers/pipe.o 00:03:05.227 CXX test/cpp_headers/queue.o 00:03:05.227 CXX test/cpp_headers/reduce.o 00:03:05.227 CXX test/cpp_headers/rpc.o 00:03:05.227 CXX test/cpp_headers/scheduler.o 00:03:05.227 CXX test/cpp_headers/scsi.o 00:03:05.227 CXX test/cpp_headers/scsi_spec.o 00:03:05.227 CXX test/cpp_headers/sock.o 00:03:05.227 CXX test/cpp_headers/stdinc.o 00:03:05.227 CXX test/cpp_headers/string.o 00:03:05.227 CXX test/cpp_headers/thread.o 00:03:05.227 CXX test/cpp_headers/trace.o 00:03:05.227 CXX test/cpp_headers/trace_parser.o 00:03:05.227 CC examples/ioat/verify/verify.o 00:03:05.227 CXX test/cpp_headers/tree.o 00:03:05.227 CC test/env/pci/pci_ut.o 00:03:05.227 CC examples/util/zipf/zipf.o 00:03:05.227 CC test/app/stub/stub.o 00:03:05.227 CC examples/ioat/perf/perf.o 00:03:05.227 CC test/app/jsoncat/jsoncat.o 00:03:05.227 CC test/app/histogram_perf/histogram_perf.o 00:03:05.227 LINK spdk_lspci 00:03:05.227 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.227 CC app/fio/nvme/fio_plugin.o 00:03:05.227 CXX test/cpp_headers/ublk.o 00:03:05.227 CC test/env/vtophys/vtophys.o 00:03:05.227 CC test/thread/poller_perf/poller_perf.o 00:03:05.227 CC test/env/memory/memory_ut.o 00:03:05.227 CC test/app/bdev_svc/bdev_svc.o 00:03:05.227 CC test/dma/test_dma/test_dma.o 00:03:05.227 CC app/fio/bdev/fio_plugin.o 00:03:05.506 LINK interrupt_tgt 00:03:05.506 LINK nvmf_tgt 00:03:05.774 LINK rpc_client_test 00:03:05.774 
LINK spdk_nvme_discover 00:03:05.774 CC test/env/mem_callbacks/mem_callbacks.o 00:03:05.774 LINK iscsi_tgt 00:03:05.774 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:05.774 LINK spdk_tgt 00:03:05.774 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.774 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:05.774 LINK histogram_perf 00:03:05.774 LINK vtophys 00:03:06.036 CXX test/cpp_headers/util.o 00:03:06.036 CXX test/cpp_headers/uuid.o 00:03:06.036 LINK poller_perf 00:03:06.036 LINK env_dpdk_post_init 00:03:06.036 CXX test/cpp_headers/version.o 00:03:06.036 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:06.036 CXX test/cpp_headers/vfio_user_pci.o 00:03:06.036 CXX test/cpp_headers/vfio_user_spec.o 00:03:06.036 CXX test/cpp_headers/vhost.o 00:03:06.036 CXX test/cpp_headers/vmd.o 00:03:06.036 CXX test/cpp_headers/xor.o 00:03:06.036 CXX test/cpp_headers/zipf.o 00:03:06.036 LINK jsoncat 00:03:06.036 LINK zipf 00:03:06.036 LINK bdev_svc 00:03:06.036 LINK verify 00:03:06.036 LINK stub 00:03:06.036 LINK spdk_trace_record 00:03:06.036 LINK spdk_dd 00:03:06.036 LINK ioat_perf 00:03:06.293 LINK spdk_trace 00:03:06.293 LINK pci_ut 00:03:06.293 LINK spdk_bdev 00:03:06.293 LINK nvme_fuzz 00:03:06.293 LINK test_dma 00:03:06.551 LINK spdk_nvme 00:03:06.551 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.551 CC examples/sock/hello_world/hello_sock.o 00:03:06.551 LINK mem_callbacks 00:03:06.551 CC examples/idxd/perf/perf.o 00:03:06.551 CC examples/vmd/led/led.o 00:03:06.551 CC test/event/reactor/reactor.o 00:03:06.552 CC test/event/reactor_perf/reactor_perf.o 00:03:06.552 CC test/event/app_repeat/app_repeat.o 00:03:06.552 CC test/event/event_perf/event_perf.o 00:03:06.552 LINK vhost_fuzz 00:03:06.552 CC test/event/scheduler/scheduler.o 00:03:06.552 CC examples/thread/thread/thread_ex.o 00:03:06.552 LINK spdk_nvme_perf 00:03:06.552 CC app/vhost/vhost.o 00:03:06.552 LINK lsvmd 00:03:06.552 LINK reactor 00:03:06.552 LINK spdk_top 00:03:06.552 LINK led 00:03:06.552 LINK event_perf 00:03:06.552 
LINK spdk_nvme_identify 00:03:06.552 LINK reactor_perf 00:03:06.552 LINK app_repeat 00:03:06.811 LINK hello_sock 00:03:06.811 LINK scheduler 00:03:06.811 LINK thread 00:03:06.811 LINK vhost 00:03:06.811 LINK idxd_perf 00:03:06.811 CC test/nvme/overhead/overhead.o 00:03:06.811 CC test/nvme/startup/startup.o 00:03:06.811 CC test/nvme/boot_partition/boot_partition.o 00:03:06.811 CC test/nvme/fdp/fdp.o 00:03:06.811 CC test/nvme/connect_stress/connect_stress.o 00:03:06.811 CC test/nvme/aer/aer.o 00:03:06.811 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.811 CC test/nvme/cuse/cuse.o 00:03:06.811 CC test/nvme/sgl/sgl.o 00:03:06.811 CC test/nvme/reserve/reserve.o 00:03:06.811 CC test/accel/dif/dif.o 00:03:06.811 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.071 CC test/nvme/e2edp/nvme_dp.o 00:03:07.071 CC test/nvme/simple_copy/simple_copy.o 00:03:07.071 CC test/nvme/err_injection/err_injection.o 00:03:07.071 CC test/nvme/reset/reset.o 00:03:07.071 LINK memory_ut 00:03:07.071 CC test/nvme/compliance/nvme_compliance.o 00:03:07.071 CC test/blobfs/mkfs/mkfs.o 00:03:07.071 CC test/lvol/esnap/esnap.o 00:03:07.071 LINK startup 00:03:07.071 LINK boot_partition 00:03:07.071 LINK doorbell_aers 00:03:07.071 LINK connect_stress 00:03:07.071 LINK reserve 00:03:07.071 CC examples/nvme/hello_world/hello_world.o 00:03:07.071 LINK fused_ordering 00:03:07.071 LINK err_injection 00:03:07.071 CC examples/nvme/arbitration/arbitration.o 00:03:07.071 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.071 CC examples/nvme/hotplug/hotplug.o 00:03:07.071 CC examples/nvme/reconnect/reconnect.o 00:03:07.071 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.071 CC examples/nvme/abort/abort.o 00:03:07.331 LINK mkfs 00:03:07.331 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:07.331 LINK simple_copy 00:03:07.331 LINK reset 00:03:07.331 LINK sgl 00:03:07.331 LINK overhead 00:03:07.331 LINK nvme_dp 00:03:07.331 LINK aer 00:03:07.331 CC examples/accel/perf/accel_perf.o 00:03:07.331 LINK 
fdp 00:03:07.331 CC examples/blob/cli/blobcli.o 00:03:07.331 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:07.331 LINK nvme_compliance 00:03:07.331 CC examples/blob/hello_world/hello_blob.o 00:03:07.331 LINK pmr_persistence 00:03:07.331 LINK cmb_copy 00:03:07.331 LINK hello_world 00:03:07.331 LINK hotplug 00:03:07.590 LINK arbitration 00:03:07.590 LINK reconnect 00:03:07.590 LINK abort 00:03:07.590 LINK hello_blob 00:03:07.590 LINK hello_fsdev 00:03:07.590 LINK iscsi_fuzz 00:03:07.590 LINK dif 00:03:07.590 LINK nvme_manage 00:03:07.850 LINK blobcli 00:03:07.850 LINK accel_perf 00:03:08.110 LINK cuse 00:03:08.110 CC test/bdev/bdevio/bdevio.o 00:03:08.370 CC examples/bdev/hello_world/hello_bdev.o 00:03:08.370 CC examples/bdev/bdevperf/bdevperf.o 00:03:08.629 LINK bdevio 00:03:08.629 LINK hello_bdev 00:03:09.198 LINK bdevperf 00:03:09.767 CC examples/nvmf/nvmf/nvmf.o 00:03:10.026 LINK nvmf 00:03:11.933 LINK esnap 00:03:12.192 00:03:12.192 real 0m59.621s 00:03:12.192 user 8m18.162s 00:03:12.192 sys 4m13.059s 00:03:12.192 05:20:08 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:12.192 05:20:08 make -- common/autotest_common.sh@10 -- $ set +x 00:03:12.192 ************************************ 00:03:12.192 END TEST make 00:03:12.192 ************************************ 00:03:12.192 05:20:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:12.192 05:20:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:12.192 05:20:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:12.192 05:20:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.192 05:20:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:12.192 05:20:08 -- pm/common@44 -- $ pid=3042013 00:03:12.192 05:20:08 -- pm/common@50 -- $ kill -TERM 3042013 00:03:12.192 05:20:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.192 05:20:08 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:12.192 05:20:08 -- pm/common@44 -- $ pid=3042015 00:03:12.192 05:20:08 -- pm/common@50 -- $ kill -TERM 3042015 00:03:12.193 05:20:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.193 05:20:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:12.193 05:20:08 -- pm/common@44 -- $ pid=3042017 00:03:12.193 05:20:08 -- pm/common@50 -- $ kill -TERM 3042017 00:03:12.193 05:20:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.193 05:20:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:12.193 05:20:08 -- pm/common@44 -- $ pid=3042042 00:03:12.193 05:20:08 -- pm/common@50 -- $ sudo -E kill -TERM 3042042 00:03:12.193 05:20:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:12.193 05:20:08 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:12.453 05:20:08 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:12.453 05:20:08 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:12.453 05:20:08 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:12.453 05:20:08 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:12.453 05:20:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:12.453 05:20:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:12.453 05:20:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:12.453 05:20:08 -- scripts/common.sh@336 -- # IFS=.-: 00:03:12.453 05:20:08 -- scripts/common.sh@336 -- # read -ra ver1 00:03:12.453 05:20:08 -- scripts/common.sh@337 -- # IFS=.-: 00:03:12.453 05:20:08 -- scripts/common.sh@337 -- # read -ra ver2 00:03:12.453 05:20:08 -- scripts/common.sh@338 -- # local 'op=<' 
00:03:12.453 05:20:08 -- scripts/common.sh@340 -- # ver1_l=2 00:03:12.453 05:20:08 -- scripts/common.sh@341 -- # ver2_l=1 00:03:12.453 05:20:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:12.453 05:20:08 -- scripts/common.sh@344 -- # case "$op" in 00:03:12.453 05:20:08 -- scripts/common.sh@345 -- # : 1 00:03:12.453 05:20:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:12.453 05:20:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:12.453 05:20:08 -- scripts/common.sh@365 -- # decimal 1 00:03:12.453 05:20:08 -- scripts/common.sh@353 -- # local d=1 00:03:12.453 05:20:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:12.453 05:20:08 -- scripts/common.sh@355 -- # echo 1 00:03:12.453 05:20:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:12.453 05:20:08 -- scripts/common.sh@366 -- # decimal 2 00:03:12.453 05:20:08 -- scripts/common.sh@353 -- # local d=2 00:03:12.453 05:20:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:12.453 05:20:08 -- scripts/common.sh@355 -- # echo 2 00:03:12.453 05:20:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:12.453 05:20:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:12.453 05:20:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:12.453 05:20:08 -- scripts/common.sh@368 -- # return 0 00:03:12.453 05:20:08 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:12.453 05:20:08 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.453 --rc genhtml_branch_coverage=1 00:03:12.453 --rc genhtml_function_coverage=1 00:03:12.453 --rc genhtml_legend=1 00:03:12.453 --rc geninfo_all_blocks=1 00:03:12.453 --rc geninfo_unexecuted_blocks=1 00:03:12.453 00:03:12.453 ' 00:03:12.453 05:20:08 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:12.453 --rc genhtml_branch_coverage=1 00:03:12.453 --rc genhtml_function_coverage=1 00:03:12.453 --rc genhtml_legend=1 00:03:12.453 --rc geninfo_all_blocks=1 00:03:12.453 --rc geninfo_unexecuted_blocks=1 00:03:12.453 00:03:12.453 ' 00:03:12.453 05:20:08 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.453 --rc genhtml_branch_coverage=1 00:03:12.453 --rc genhtml_function_coverage=1 00:03:12.453 --rc genhtml_legend=1 00:03:12.453 --rc geninfo_all_blocks=1 00:03:12.453 --rc geninfo_unexecuted_blocks=1 00:03:12.453 00:03:12.453 ' 00:03:12.453 05:20:08 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.453 --rc genhtml_branch_coverage=1 00:03:12.453 --rc genhtml_function_coverage=1 00:03:12.453 --rc genhtml_legend=1 00:03:12.453 --rc geninfo_all_blocks=1 00:03:12.453 --rc geninfo_unexecuted_blocks=1 00:03:12.453 00:03:12.453 ' 00:03:12.453 05:20:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:12.453 05:20:08 -- nvmf/common.sh@7 -- # uname -s 00:03:12.453 05:20:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:12.453 05:20:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:12.453 05:20:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:12.453 05:20:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:12.453 05:20:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:12.453 05:20:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:12.453 05:20:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:12.453 05:20:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:12.453 05:20:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:12.453 05:20:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:12.453 05:20:08 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:12.453 05:20:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:12.453 05:20:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:12.453 05:20:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:12.453 05:20:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:12.453 05:20:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:12.453 05:20:08 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:12.453 05:20:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:12.453 05:20:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:12.453 05:20:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:12.453 05:20:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:12.453 05:20:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.453 05:20:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.453 05:20:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.453 05:20:08 -- paths/export.sh@5 -- # export PATH 00:03:12.453 05:20:08 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.453 05:20:08 -- nvmf/common.sh@51 -- # : 0 00:03:12.453 05:20:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:12.453 05:20:08 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:12.453 05:20:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:12.453 05:20:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:12.453 05:20:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:12.453 05:20:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:12.454 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:12.454 05:20:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:12.454 05:20:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:12.454 05:20:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:12.454 05:20:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:12.454 05:20:08 -- spdk/autotest.sh@32 -- # uname -s 00:03:12.454 05:20:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:12.454 05:20:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:12.454 05:20:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:12.454 05:20:08 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:12.454 05:20:08 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:12.454 05:20:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:12.454 05:20:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:12.454 05:20:08 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:03:12.454 05:20:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:12.454 05:20:08 -- spdk/autotest.sh@48 -- # udevadm_pid=3105939 00:03:12.454 05:20:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:12.454 05:20:08 -- pm/common@17 -- # local monitor 00:03:12.454 05:20:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.454 05:20:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.454 05:20:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.454 05:20:08 -- pm/common@21 -- # date +%s 00:03:12.454 05:20:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.454 05:20:08 -- pm/common@21 -- # date +%s 00:03:12.454 05:20:08 -- pm/common@21 -- # date +%s 00:03:12.454 05:20:08 -- pm/common@25 -- # sleep 1 00:03:12.454 05:20:08 -- pm/common@21 -- # date +%s 00:03:12.454 05:20:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681208 00:03:12.454 05:20:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681208 00:03:12.454 05:20:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681208 00:03:12.454 05:20:08 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681208 00:03:12.454 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681208_collect-cpu-temp.pm.log 00:03:12.454 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681208_collect-cpu-load.pm.log 00:03:12.454 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681208_collect-vmstat.pm.log 00:03:12.454 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681208_collect-bmc-pm.bmc.pm.log 00:03:13.389 05:20:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:13.389 05:20:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:13.389 05:20:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:13.389 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:03:13.389 05:20:09 -- spdk/autotest.sh@59 -- # create_test_list 00:03:13.389 05:20:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:13.389 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:03:13.647 05:20:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:13.647 05:20:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:13.647 05:20:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:13.647 05:20:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:13.647 05:20:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:13.647 05:20:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:13.647 05:20:10 -- common/autotest_common.sh@1457 -- # uname 00:03:13.647 05:20:10 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:13.647 05:20:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:13.647 05:20:10 -- common/autotest_common.sh@1477 -- # uname 00:03:13.647 05:20:10 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:13.647 05:20:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:13.647 
05:20:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:13.647 lcov: LCOV version 1.15 00:03:13.647 05:20:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:35.635 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:35.635 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:38.173 05:20:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:38.173 05:20:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.173 05:20:34 -- common/autotest_common.sh@10 -- # set +x 00:03:38.173 05:20:34 -- spdk/autotest.sh@78 -- # rm -f 00:03:38.173 05:20:34 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.366 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:42.366 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:42.366 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:42.366 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:42.366 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:42.366 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:42.366 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:42.366 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:42.625 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:42.625 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:42.899 05:20:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:42.899 05:20:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:42.899 05:20:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:42.899 05:20:39 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:42.899 05:20:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.899 05:20:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:42.899 05:20:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:42.899 05:20:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.899 05:20:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.899 05:20:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:42.899 05:20:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.899 05:20:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.899 05:20:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:42.899 05:20:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:42.899 05:20:39 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.899 No valid GPT data, bailing 00:03:42.899 05:20:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.899 05:20:39 -- scripts/common.sh@394 -- # pt= 00:03:42.899 05:20:39 -- 
scripts/common.sh@395 -- # return 1 00:03:42.899 05:20:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.899 1+0 records in 00:03:42.899 1+0 records out 00:03:42.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501571 s, 209 MB/s 00:03:42.899 05:20:39 -- spdk/autotest.sh@105 -- # sync 00:03:42.899 05:20:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.899 05:20:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.899 05:20:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.025 05:20:46 -- spdk/autotest.sh@111 -- # uname -s 00:03:51.025 05:20:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:51.025 05:20:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:51.025 05:20:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:54.311 Hugepages 00:03:54.311 node hugesize free / total 00:03:54.311 node0 1048576kB 0 / 0 00:03:54.311 node0 2048kB 0 / 0 00:03:54.311 node1 1048576kB 0 / 0 00:03:54.311 node1 2048kB 0 / 0 00:03:54.311 00:03:54.311 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.312 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:54.312 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:54.312 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:54.312 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:54.312 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:54.312 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:54.312 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:54.312 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:03:54.312 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:54.312 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:54.312 05:20:50 -- spdk/autotest.sh@117 -- # uname -s 00:03:54.312 05:20:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:54.312 05:20:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:54.312 05:20:50 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:58.504 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:58.504 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:58.763 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:58.763 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:58.763 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:58.763 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:00.671 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.671 05:20:57 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:02.050 05:20:58 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:02.050 05:20:58 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:02.050 05:20:58 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.050 05:20:58 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:02.050 05:20:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:02.050 05:20:58 -- common/autotest_common.sh@1498 -- # local bdfs 
00:04:02.050 05:20:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.050 05:20:58 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.050 05:20:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.050 05:20:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:02.050 05:20:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:02.050 05:20:58 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.241 Waiting for block devices as requested 00:04:06.241 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:06.241 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:06.501 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:06.501 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:06.501 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:06.759 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:06.759 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:06.759 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:07.017 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:07.017 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:07.275 05:21:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.275 05:21:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:07.275 05:21:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:07.275 05:21:03 
-- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:07.275 05:21:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:07.275 05:21:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:07.275 05:21:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:07.275 05:21:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:07.275 05:21:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:07.275 05:21:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:07.275 05:21:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:07.275 05:21:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.275 05:21:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.275 05:21:03 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:07.275 05:21:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.275 05:21:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.275 05:21:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:07.275 05:21:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.275 05:21:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.275 05:21:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.275 05:21:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.275 05:21:03 -- common/autotest_common.sh@1543 -- # continue 00:04:07.275 05:21:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:07.275 05:21:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.275 05:21:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.275 05:21:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:07.275 05:21:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.275 
05:21:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.275 05:21:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:11.464 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.464 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.369 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.369 05:21:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:13.369 05:21:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.369 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.369 05:21:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:13.369 05:21:09 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:13.369 05:21:09 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:13.369 05:21:09 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:13.369 05:21:09 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:13.369 05:21:09 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:13.369 05:21:09 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:13.369 05:21:09 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
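The `get_nvme_bdfs_by_id 0x0a54` trace entered here reads each controller's PCI device ID from sysfs and keeps only the BDFs that match. A minimal sketch of that filter, with the sysfs read stubbed by a plain variable so it runs anywhere (the BDF `0000:d8:00.0` and device ID `0x0a54` are the values this run reported):

```shell
# Sketch of the device-ID filter inside get_nvme_bdfs_by_id. On a real
# system `device` would come from: cat /sys/bus/pci/devices/$bdf/device
wanted=0x0a54                 # device ID the opal cleanup targets
bdfs=()
for bdf in 0000:d8:00.0; do
    device=0x0a54             # stubbed sysfs read for this sketch
    [[ $device == "$wanted" ]] && bdfs+=("$bdf")
done
(( ${#bdfs[@]} > 0 )) && printf '%s\n' "${bdfs[@]}"
```

This mirrors the `[[ 0x0a54 == \0\x\0\a\5\4 ]]` comparison the log shows a few records later, where the matched BDF is appended and printed.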
00:04:13.369 05:21:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.369 05:21:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.369 05:21:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.369 05:21:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:13.369 05:21:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.369 05:21:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:13.369 05:21:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:13.369 05:21:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.369 05:21:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:13.369 05:21:09 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:13.369 05:21:09 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:13.369 05:21:09 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:13.369 05:21:09 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:13.369 05:21:09 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:04:13.369 05:21:09 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:04:13.369 05:21:09 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3123574 00:04:13.369 05:21:09 -- common/autotest_common.sh@1585 -- # waitforlisten 3123574 00:04:13.369 05:21:09 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.369 05:21:09 -- common/autotest_common.sh@835 -- # '[' -z 3123574 ']' 00:04:13.369 05:21:09 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.369 05:21:09 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.369 05:21:09 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:13.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.369 05:21:09 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.369 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.629 [2024-11-27 05:21:09.974909] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:13.629 [2024-11-27 05:21:09.975006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3123574 ] 00:04:13.629 [2024-11-27 05:21:10.133120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.888 [2024-11-27 05:21:10.233976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.457 05:21:10 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.457 05:21:10 -- common/autotest_common.sh@868 -- # return 0 00:04:14.457 05:21:10 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:14.457 05:21:10 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:14.457 05:21:10 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:17.750 nvme0n1 00:04:17.750 05:21:14 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:17.750 [2024-11-27 05:21:14.208258] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:17.750 request: 00:04:17.750 { 00:04:17.750 "nvme_ctrlr_name": "nvme0", 00:04:17.750 "password": "test", 00:04:17.750 "method": "bdev_nvme_opal_revert", 00:04:17.750 "req_id": 1 00:04:17.750 } 00:04:17.750 Got JSON-RPC error response 00:04:17.750 response: 00:04:17.750 { 00:04:17.750 "code": -32602, 
00:04:17.750 "message": "Invalid parameters" 00:04:17.750 } 00:04:17.750 05:21:14 -- common/autotest_common.sh@1591 -- # true 00:04:17.750 05:21:14 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:17.750 05:21:14 -- common/autotest_common.sh@1595 -- # killprocess 3123574 00:04:17.750 05:21:14 -- common/autotest_common.sh@954 -- # '[' -z 3123574 ']' 00:04:17.750 05:21:14 -- common/autotest_common.sh@958 -- # kill -0 3123574 00:04:17.750 05:21:14 -- common/autotest_common.sh@959 -- # uname 00:04:17.750 05:21:14 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.750 05:21:14 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3123574 00:04:17.750 05:21:14 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.750 05:21:14 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.750 05:21:14 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3123574' 00:04:17.750 killing process with pid 3123574 00:04:17.750 05:21:14 -- common/autotest_common.sh@973 -- # kill 3123574 00:04:17.750 05:21:14 -- common/autotest_common.sh@978 -- # wait 3123574 00:04:23.026 05:21:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:23.026 05:21:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:23.026 05:21:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:23.026 05:21:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:23.026 05:21:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:23.026 05:21:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.026 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:04:23.026 05:21:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:23.026 05:21:18 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:23.026 05:21:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.026 05:21:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.026 05:21:18 -- 
common/autotest_common.sh@10 -- # set +x 00:04:23.027 ************************************ 00:04:23.027 START TEST env 00:04:23.027 ************************************ 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:23.027 * Looking for test storage... 00:04:23.027 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.027 05:21:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.027 05:21:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.027 05:21:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.027 05:21:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.027 05:21:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.027 05:21:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.027 05:21:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.027 05:21:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.027 05:21:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.027 05:21:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.027 05:21:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.027 05:21:18 env -- scripts/common.sh@344 -- # case "$op" in 00:04:23.027 05:21:18 env -- scripts/common.sh@345 -- # : 1 00:04:23.027 05:21:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.027 05:21:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.027 05:21:18 env -- scripts/common.sh@365 -- # decimal 1 00:04:23.027 05:21:18 env -- scripts/common.sh@353 -- # local d=1 00:04:23.027 05:21:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.027 05:21:18 env -- scripts/common.sh@355 -- # echo 1 00:04:23.027 05:21:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.027 05:21:18 env -- scripts/common.sh@366 -- # decimal 2 00:04:23.027 05:21:18 env -- scripts/common.sh@353 -- # local d=2 00:04:23.027 05:21:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.027 05:21:18 env -- scripts/common.sh@355 -- # echo 2 00:04:23.027 05:21:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.027 05:21:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.027 05:21:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.027 05:21:18 env -- scripts/common.sh@368 -- # return 0 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.027 --rc genhtml_branch_coverage=1 00:04:23.027 --rc genhtml_function_coverage=1 00:04:23.027 --rc genhtml_legend=1 00:04:23.027 --rc geninfo_all_blocks=1 00:04:23.027 --rc geninfo_unexecuted_blocks=1 00:04:23.027 00:04:23.027 ' 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.027 --rc genhtml_branch_coverage=1 00:04:23.027 --rc genhtml_function_coverage=1 00:04:23.027 --rc genhtml_legend=1 00:04:23.027 --rc geninfo_all_blocks=1 00:04:23.027 --rc geninfo_unexecuted_blocks=1 00:04:23.027 00:04:23.027 ' 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:23.027 --rc genhtml_branch_coverage=1 00:04:23.027 --rc genhtml_function_coverage=1 00:04:23.027 --rc genhtml_legend=1 00:04:23.027 --rc geninfo_all_blocks=1 00:04:23.027 --rc geninfo_unexecuted_blocks=1 00:04:23.027 00:04:23.027 ' 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.027 --rc genhtml_branch_coverage=1 00:04:23.027 --rc genhtml_function_coverage=1 00:04:23.027 --rc genhtml_legend=1 00:04:23.027 --rc geninfo_all_blocks=1 00:04:23.027 --rc geninfo_unexecuted_blocks=1 00:04:23.027 00:04:23.027 ' 00:04:23.027 05:21:18 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.027 05:21:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.027 05:21:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.027 ************************************ 00:04:23.027 START TEST env_memory 00:04:23.027 ************************************ 00:04:23.027 05:21:18 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:23.027 00:04:23.027 00:04:23.027 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.027 http://cunit.sourceforge.net/ 00:04:23.027 00:04:23.027 00:04:23.027 Suite: memory 00:04:23.027 Test: alloc and free memory map ...[2024-11-27 05:21:18.962769] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.027 passed 00:04:23.027 Test: mem map translation ...[2024-11-27 05:21:18.997027] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.027 [2024-11-27 05:21:18.997057] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.027 [2024-11-27 05:21:18.997111] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.027 [2024-11-27 05:21:18.997124] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.027 passed 00:04:23.027 Test: mem map registration ...[2024-11-27 05:21:19.050941] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:23.027 [2024-11-27 05:21:19.050967] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.027 passed 00:04:23.027 Test: mem map adjacent registrations ...passed 00:04:23.027 00:04:23.027 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.027 suites 1 1 n/a 0 0 00:04:23.027 tests 4 4 4 0 0 00:04:23.027 asserts 152 152 152 0 n/a 00:04:23.027 00:04:23.027 Elapsed time = 0.196 seconds 00:04:23.027 00:04:23.027 real 0m0.236s 00:04:23.027 user 0m0.208s 00:04:23.027 sys 0m0.027s 00:04:23.027 05:21:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.027 05:21:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.027 ************************************ 00:04:23.027 END TEST env_memory 00:04:23.027 ************************************ 00:04:23.028 05:21:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:23.028 05:21:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.028 05:21:19 env -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.028 05:21:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.028 ************************************ 00:04:23.028 START TEST env_vtophys 00:04:23.028 ************************************ 00:04:23.028 05:21:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:23.028 EAL: lib.eal log level changed from notice to debug 00:04:23.028 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.028 EAL: Detected lcore 1 as core 1 on socket 0 00:04:23.028 EAL: Detected lcore 2 as core 2 on socket 0 00:04:23.028 EAL: Detected lcore 3 as core 3 on socket 0 00:04:23.028 EAL: Detected lcore 4 as core 4 on socket 0 00:04:23.028 EAL: Detected lcore 5 as core 5 on socket 0 00:04:23.028 EAL: Detected lcore 6 as core 6 on socket 0 00:04:23.028 EAL: Detected lcore 7 as core 8 on socket 0 00:04:23.028 EAL: Detected lcore 8 as core 9 on socket 0 00:04:23.028 EAL: Detected lcore 9 as core 10 on socket 0 00:04:23.028 EAL: Detected lcore 10 as core 11 on socket 0 00:04:23.028 EAL: Detected lcore 11 as core 12 on socket 0 00:04:23.028 EAL: Detected lcore 12 as core 13 on socket 0 00:04:23.028 EAL: Detected lcore 13 as core 14 on socket 0 00:04:23.028 EAL: Detected lcore 14 as core 16 on socket 0 00:04:23.028 EAL: Detected lcore 15 as core 17 on socket 0 00:04:23.028 EAL: Detected lcore 16 as core 18 on socket 0 00:04:23.028 EAL: Detected lcore 17 as core 19 on socket 0 00:04:23.028 EAL: Detected lcore 18 as core 20 on socket 0 00:04:23.028 EAL: Detected lcore 19 as core 21 on socket 0 00:04:23.028 EAL: Detected lcore 20 as core 22 on socket 0 00:04:23.028 EAL: Detected lcore 21 as core 24 on socket 0 00:04:23.028 EAL: Detected lcore 22 as core 25 on socket 0 00:04:23.028 EAL: Detected lcore 23 as core 26 on socket 0 00:04:23.028 EAL: Detected lcore 24 as core 27 on socket 0 00:04:23.028 EAL: Detected lcore 25 as core 28 on socket 0 00:04:23.028 
EAL: Detected lcore 26 as core 29 on socket 0 00:04:23.028 EAL: Detected lcore 27 as core 30 on socket 0 00:04:23.028 EAL: Detected lcore 28 as core 0 on socket 1 00:04:23.028 EAL: Detected lcore 29 as core 1 on socket 1 00:04:23.028 EAL: Detected lcore 30 as core 2 on socket 1 00:04:23.028 EAL: Detected lcore 31 as core 3 on socket 1 00:04:23.028 EAL: Detected lcore 32 as core 4 on socket 1 00:04:23.028 EAL: Detected lcore 33 as core 5 on socket 1 00:04:23.028 EAL: Detected lcore 34 as core 6 on socket 1 00:04:23.028 EAL: Detected lcore 35 as core 8 on socket 1 00:04:23.028 EAL: Detected lcore 36 as core 9 on socket 1 00:04:23.028 EAL: Detected lcore 37 as core 10 on socket 1 00:04:23.028 EAL: Detected lcore 38 as core 11 on socket 1 00:04:23.028 EAL: Detected lcore 39 as core 12 on socket 1 00:04:23.028 EAL: Detected lcore 40 as core 13 on socket 1 00:04:23.028 EAL: Detected lcore 41 as core 14 on socket 1 00:04:23.028 EAL: Detected lcore 42 as core 16 on socket 1 00:04:23.028 EAL: Detected lcore 43 as core 17 on socket 1 00:04:23.028 EAL: Detected lcore 44 as core 18 on socket 1 00:04:23.028 EAL: Detected lcore 45 as core 19 on socket 1 00:04:23.028 EAL: Detected lcore 46 as core 20 on socket 1 00:04:23.028 EAL: Detected lcore 47 as core 21 on socket 1 00:04:23.028 EAL: Detected lcore 48 as core 22 on socket 1 00:04:23.028 EAL: Detected lcore 49 as core 24 on socket 1 00:04:23.028 EAL: Detected lcore 50 as core 25 on socket 1 00:04:23.028 EAL: Detected lcore 51 as core 26 on socket 1 00:04:23.028 EAL: Detected lcore 52 as core 27 on socket 1 00:04:23.028 EAL: Detected lcore 53 as core 28 on socket 1 00:04:23.028 EAL: Detected lcore 54 as core 29 on socket 1 00:04:23.028 EAL: Detected lcore 55 as core 30 on socket 1 00:04:23.028 EAL: Detected lcore 56 as core 0 on socket 0 00:04:23.028 EAL: Detected lcore 57 as core 1 on socket 0 00:04:23.028 EAL: Detected lcore 58 as core 2 on socket 0 00:04:23.028 EAL: Detected lcore 59 as core 3 on socket 0 00:04:23.028 EAL: 
Detected lcore 60 as core 4 on socket 0 00:04:23.028 EAL: Detected lcore 61 as core 5 on socket 0 00:04:23.028 EAL: Detected lcore 62 as core 6 on socket 0 00:04:23.028 EAL: Detected lcore 63 as core 8 on socket 0 00:04:23.028 EAL: Detected lcore 64 as core 9 on socket 0 00:04:23.028 EAL: Detected lcore 65 as core 10 on socket 0 00:04:23.028 EAL: Detected lcore 66 as core 11 on socket 0 00:04:23.028 EAL: Detected lcore 67 as core 12 on socket 0 00:04:23.028 EAL: Detected lcore 68 as core 13 on socket 0 00:04:23.028 EAL: Detected lcore 69 as core 14 on socket 0 00:04:23.028 EAL: Detected lcore 70 as core 16 on socket 0 00:04:23.028 EAL: Detected lcore 71 as core 17 on socket 0 00:04:23.028 EAL: Detected lcore 72 as core 18 on socket 0 00:04:23.028 EAL: Detected lcore 73 as core 19 on socket 0 00:04:23.028 EAL: Detected lcore 74 as core 20 on socket 0 00:04:23.028 EAL: Detected lcore 75 as core 21 on socket 0 00:04:23.028 EAL: Detected lcore 76 as core 22 on socket 0 00:04:23.028 EAL: Detected lcore 77 as core 24 on socket 0 00:04:23.028 EAL: Detected lcore 78 as core 25 on socket 0 00:04:23.028 EAL: Detected lcore 79 as core 26 on socket 0 00:04:23.028 EAL: Detected lcore 80 as core 27 on socket 0 00:04:23.028 EAL: Detected lcore 81 as core 28 on socket 0 00:04:23.028 EAL: Detected lcore 82 as core 29 on socket 0 00:04:23.028 EAL: Detected lcore 83 as core 30 on socket 0 00:04:23.028 EAL: Detected lcore 84 as core 0 on socket 1 00:04:23.028 EAL: Detected lcore 85 as core 1 on socket 1 00:04:23.028 EAL: Detected lcore 86 as core 2 on socket 1 00:04:23.028 EAL: Detected lcore 87 as core 3 on socket 1 00:04:23.028 EAL: Detected lcore 88 as core 4 on socket 1 00:04:23.028 EAL: Detected lcore 89 as core 5 on socket 1 00:04:23.028 EAL: Detected lcore 90 as core 6 on socket 1 00:04:23.028 EAL: Detected lcore 91 as core 8 on socket 1 00:04:23.028 EAL: Detected lcore 92 as core 9 on socket 1 00:04:23.028 EAL: Detected lcore 93 as core 10 on socket 1 00:04:23.028 EAL: 
Detected lcore 94 as core 11 on socket 1 00:04:23.028 EAL: Detected lcore 95 as core 12 on socket 1 00:04:23.028 EAL: Detected lcore 96 as core 13 on socket 1 00:04:23.028 EAL: Detected lcore 97 as core 14 on socket 1 00:04:23.028 EAL: Detected lcore 98 as core 16 on socket 1 00:04:23.028 EAL: Detected lcore 99 as core 17 on socket 1 00:04:23.028 EAL: Detected lcore 100 as core 18 on socket 1 00:04:23.028 EAL: Detected lcore 101 as core 19 on socket 1 00:04:23.028 EAL: Detected lcore 102 as core 20 on socket 1 00:04:23.028 EAL: Detected lcore 103 as core 21 on socket 1 00:04:23.028 EAL: Detected lcore 104 as core 22 on socket 1 00:04:23.028 EAL: Detected lcore 105 as core 24 on socket 1 00:04:23.028 EAL: Detected lcore 106 as core 25 on socket 1 00:04:23.028 EAL: Detected lcore 107 as core 26 on socket 1 00:04:23.028 EAL: Detected lcore 108 as core 27 on socket 1 00:04:23.028 EAL: Detected lcore 109 as core 28 on socket 1 00:04:23.028 EAL: Detected lcore 110 as core 29 on socket 1 00:04:23.028 EAL: Detected lcore 111 as core 30 on socket 1 00:04:23.028 EAL: Maximum logical cores by configuration: 128 00:04:23.028 EAL: Detected CPU lcores: 112 00:04:23.028 EAL: Detected NUMA nodes: 2 00:04:23.028 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.028 EAL: Detected shared linkage of DPDK 00:04:23.028 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.028 EAL: Bus pci wants IOVA as 'DC' 00:04:23.028 EAL: Buses did not request a specific IOVA mode. 00:04:23.028 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:23.028 EAL: Selected IOVA mode 'VA' 00:04:23.028 EAL: Probing VFIO support... 
00:04:23.028 EAL: IOMMU type 1 (Type 1) is supported 00:04:23.028 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:23.028 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:23.028 EAL: VFIO support initialized 00:04:23.028 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.028 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.028 EAL: Setting up physically contiguous memory... 00:04:23.028 EAL: Setting maximum number of open files to 524288 00:04:23.028 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.028 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:23.029 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.029 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.029 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.029 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.029 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.029 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:23.029 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:23.029 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:23.029 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:23.029 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.029 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:23.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.029 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.029 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:23.029 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:23.029 EAL: Hugepages will be freed exactly as allocated. 00:04:23.029 EAL: No shared files mode enabled, IPC is disabled 00:04:23.029 EAL: No shared files mode enabled, IPC is disabled 00:04:23.029 EAL: TSC frequency is ~2500000 KHz 00:04:23.029 EAL: Main lcore 0 is ready (tid=7f57950b4a40;cpuset=[0]) 00:04:23.029 EAL: Trying to obtain current memory policy. 00:04:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.029 EAL: Restoring previous memory policy: 0 00:04:23.029 EAL: request: mp_malloc_sync 00:04:23.029 EAL: No shared files mode enabled, IPC is disabled 00:04:23.029 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.029 EAL: No shared files mode enabled, IPC is disabled 00:04:23.029 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.029 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.029 00:04:23.029 00:04:23.029 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.029 http://cunit.sourceforge.net/ 00:04:23.029 00:04:23.029 00:04:23.029 Suite: components_suite 00:04:23.288 Test: vtophys_malloc_test ...passed 00:04:23.288 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.288 EAL: Restoring previous memory policy: 4 00:04:23.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.288 EAL: request: mp_malloc_sync 00:04:23.288 EAL: No shared files mode enabled, IPC is disabled 00:04:23.288 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.288 EAL: request: mp_malloc_sync 00:04:23.288 EAL: No shared files mode enabled, IPC is disabled 00:04:23.288 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.288 EAL: Trying to obtain current memory policy. 
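The repeated 0x400000000-byte virtual areas requested above are not arbitrary: each is one memseg list's worth of address space, `n_segs:8192` times the 2 MiB hugepage size, reserved as VA only (no backing pages yet). The arithmetic checks out:

```shell
# 8192 segments x 2 MiB hugepages = 0x400000000 bytes (16 GiB) per list,
# matching every "Ask a virtual area of 0x400000000 bytes" record above.
printf '0x%x\n' $((8192 * 2097152))
```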
00:04:23.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.288 EAL: Restoring previous memory policy: 4 00:04:23.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.288 EAL: request: mp_malloc_sync 00:04:23.288 EAL: No shared files mode enabled, IPC is disabled 00:04:23.288 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.288 EAL: request: mp_malloc_sync 00:04:23.288 EAL: No shared files mode enabled, IPC is disabled 00:04:23.288 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.288 EAL: Trying to obtain current memory policy. 00:04:23.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.288 EAL: Restoring previous memory policy: 4 00:04:23.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.288 EAL: request: mp_malloc_sync 00:04:23.288 EAL: No shared files mode enabled, IPC is disabled 00:04:23.288 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.288 EAL: request: mp_malloc_sync 00:04:23.288 EAL: No shared files mode enabled, IPC is disabled 00:04:23.288 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.288 EAL: Trying to obtain current memory policy. 00:04:23.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.289 EAL: Restoring previous memory policy: 4 00:04:23.289 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.289 EAL: request: mp_malloc_sync 00:04:23.289 EAL: No shared files mode enabled, IPC is disabled 00:04:23.289 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.289 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.289 EAL: request: mp_malloc_sync 00:04:23.289 EAL: No shared files mode enabled, IPC is disabled 00:04:23.289 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.547 EAL: Trying to obtain current memory policy. 
00:04:23.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.547 EAL: Restoring previous memory policy: 4 00:04:23.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.547 EAL: request: mp_malloc_sync 00:04:23.547 EAL: No shared files mode enabled, IPC is disabled 00:04:23.547 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.547 EAL: request: mp_malloc_sync 00:04:23.547 EAL: No shared files mode enabled, IPC is disabled 00:04:23.547 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.547 EAL: Trying to obtain current memory policy. 00:04:23.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.547 EAL: Restoring previous memory policy: 4 00:04:23.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.547 EAL: request: mp_malloc_sync 00:04:23.547 EAL: No shared files mode enabled, IPC is disabled 00:04:23.547 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.547 EAL: request: mp_malloc_sync 00:04:23.547 EAL: No shared files mode enabled, IPC is disabled 00:04:23.547 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.805 EAL: Trying to obtain current memory policy. 00:04:23.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.805 EAL: Restoring previous memory policy: 4 00:04:23.805 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.805 EAL: request: mp_malloc_sync 00:04:23.805 EAL: No shared files mode enabled, IPC is disabled 00:04:23.805 EAL: Heap on socket 0 was expanded by 130MB 00:04:24.064 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.064 EAL: request: mp_malloc_sync 00:04:24.064 EAL: No shared files mode enabled, IPC is disabled 00:04:24.064 EAL: Heap on socket 0 was shrunk by 130MB 00:04:24.323 EAL: Trying to obtain current memory policy. 
00:04:24.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.323 EAL: Restoring previous memory policy: 4 00:04:24.323 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.323 EAL: request: mp_malloc_sync 00:04:24.323 EAL: No shared files mode enabled, IPC is disabled 00:04:24.323 EAL: Heap on socket 0 was expanded by 258MB 00:04:24.582 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.582 EAL: request: mp_malloc_sync 00:04:24.582 EAL: No shared files mode enabled, IPC is disabled 00:04:24.582 EAL: Heap on socket 0 was shrunk by 258MB 00:04:25.151 EAL: Trying to obtain current memory policy. 00:04:25.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.151 EAL: Restoring previous memory policy: 4 00:04:25.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.151 EAL: request: mp_malloc_sync 00:04:25.151 EAL: No shared files mode enabled, IPC is disabled 00:04:25.151 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.158 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.158 EAL: request: mp_malloc_sync 00:04:26.158 EAL: No shared files mode enabled, IPC is disabled 00:04:26.158 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.786 EAL: Trying to obtain current memory policy. 
00:04:26.786 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.046 EAL: Restoring previous memory policy: 4 00:04:27.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.046 EAL: request: mp_malloc_sync 00:04:27.046 EAL: No shared files mode enabled, IPC is disabled 00:04:27.046 EAL: Heap on socket 0 was expanded by 1026MB 00:04:28.425 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.685 EAL: request: mp_malloc_sync 00:04:28.685 EAL: No shared files mode enabled, IPC is disabled 00:04:28.685 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:30.587 passed 00:04:30.587 00:04:30.587 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.587 suites 1 1 n/a 0 0 00:04:30.587 tests 2 2 2 0 0 00:04:30.587 asserts 497 497 497 0 n/a 00:04:30.587 00:04:30.587 Elapsed time = 7.242 seconds 00:04:30.587 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.587 EAL: request: mp_malloc_sync 00:04:30.587 EAL: No shared files mode enabled, IPC is disabled 00:04:30.587 EAL: Heap on socket 0 was shrunk by 2MB 00:04:30.587 EAL: No shared files mode enabled, IPC is disabled 00:04:30.587 EAL: No shared files mode enabled, IPC is disabled 00:04:30.587 EAL: No shared files mode enabled, IPC is disabled 00:04:30.587 00:04:30.587 real 0m7.529s 00:04:30.587 user 0m6.637s 00:04:30.587 sys 0m0.839s 00:04:30.587 05:21:26 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.587 05:21:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:30.587 ************************************ 00:04:30.587 END TEST env_vtophys 00:04:30.587 ************************************ 00:04:30.587 05:21:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.587 05:21:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.587 05:21:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.587 05:21:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.587 
************************************ 00:04:30.587 START TEST env_pci 00:04:30.587 ************************************ 00:04:30.587 05:21:26 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.587 00:04:30.587 00:04:30.587 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.587 http://cunit.sourceforge.net/ 00:04:30.587 00:04:30.587 00:04:30.587 Suite: pci 00:04:30.587 Test: pci_hook ...[2024-11-27 05:21:26.879370] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3126707 has claimed it 00:04:30.587 EAL: Cannot find device (10000:00:01.0) 00:04:30.587 EAL: Failed to attach device on primary process 00:04:30.587 passed 00:04:30.587 00:04:30.587 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.587 suites 1 1 n/a 0 0 00:04:30.587 tests 1 1 1 0 0 00:04:30.587 asserts 25 25 25 0 n/a 00:04:30.587 00:04:30.587 Elapsed time = 0.064 seconds 00:04:30.587 00:04:30.587 real 0m0.155s 00:04:30.587 user 0m0.053s 00:04:30.587 sys 0m0.101s 00:04:30.587 05:21:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.587 05:21:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:30.587 ************************************ 00:04:30.587 END TEST env_pci 00:04:30.587 ************************************ 00:04:30.587 05:21:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:30.587 05:21:27 env -- env/env.sh@15 -- # uname 00:04:30.587 05:21:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:30.587 05:21:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:30.587 05:21:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.587 05:21:27 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:30.587 05:21:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.587 05:21:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.587 ************************************ 00:04:30.587 START TEST env_dpdk_post_init 00:04:30.587 ************************************ 00:04:30.587 05:21:27 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.587 EAL: Detected CPU lcores: 112 00:04:30.587 EAL: Detected NUMA nodes: 2 00:04:30.587 EAL: Detected shared linkage of DPDK 00:04:30.587 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.847 EAL: Selected IOVA mode 'VA' 00:04:30.847 EAL: VFIO support initialized 00:04:30.847 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.847 EAL: Using IOMMU type 1 (Type 1) 00:04:30.847 EAL: Ignore mapping IO port bar(1) 00:04:30.847 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:30.847 EAL: Ignore mapping IO port bar(1) 00:04:30.847 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:30.847 EAL: Ignore mapping IO port bar(1) 00:04:30.847 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:30.847 EAL: Ignore mapping IO port bar(1) 00:04:30.847 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:30.847 EAL: Ignore mapping IO port bar(1) 00:04:30.847 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:30.847 EAL: Ignore mapping IO port bar(1) 00:04:30.847 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:31.106 EAL: Ignore mapping IO port bar(1) 00:04:31.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:32.044 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:36.233 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:36.233 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:36.233 Starting DPDK initialization... 00:04:36.233 Starting SPDK post initialization... 00:04:36.233 SPDK NVMe probe 00:04:36.233 Attaching to 0000:d8:00.0 00:04:36.233 Attached to 0000:d8:00.0 00:04:36.233 Cleaning up... 
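The probe lines above identify devices by extended PCI addresses in `domain:bus:device.function` form: the ioat channels sit on bus `0x00` (socket 0) and bus `0x80` (socket 1), and the NVMe drive is at `0000:d8:00.0`. A small helper for splitting such addresses when reading logs like this — `parse_bdf` is a hypothetical name for illustration, not part of SPDK or DPDK:

```python
def parse_bdf(bdf: str):
    """Split an extended PCI address 'dddd:bb:dd.f' into integer fields.

    Hypothetical helper for reading probe lines like the ones above;
    all four fields are hexadecimal in the textual form.
    """
    domain, bus, rest = bdf.split(":")
    device, function = rest.split(".")
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)

# Bus 0x80 = 128 marks the socket-1 ioat channels in the log above.
print(parse_bdf("0000:80:04.7"))  # -> (0, 128, 4, 7)
```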
00:04:36.233 00:04:36.233 real 0m5.501s 00:04:36.233 user 0m3.874s 00:04:36.233 sys 0m0.683s 00:04:36.233 05:21:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.233 05:21:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.233 ************************************ 00:04:36.233 END TEST env_dpdk_post_init 00:04:36.233 ************************************ 00:04:36.233 05:21:32 env -- env/env.sh@26 -- # uname 00:04:36.233 05:21:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.233 05:21:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.233 05:21:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.233 05:21:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.233 05:21:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.233 ************************************ 00:04:36.233 START TEST env_mem_callbacks 00:04:36.233 ************************************ 00:04:36.233 05:21:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.234 EAL: Detected CPU lcores: 112 00:04:36.234 EAL: Detected NUMA nodes: 2 00:04:36.234 EAL: Detected shared linkage of DPDK 00:04:36.234 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.234 EAL: Selected IOVA mode 'VA' 00:04:36.234 EAL: VFIO support initialized 00:04:36.234 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.234 00:04:36.234 00:04:36.234 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.234 http://cunit.sourceforge.net/ 00:04:36.234 00:04:36.234 00:04:36.234 Suite: memory 00:04:36.234 Test: test ... 
00:04:36.234 register 0x200000200000 2097152 00:04:36.234 malloc 3145728 00:04:36.234 register 0x200000400000 4194304 00:04:36.234 buf 0x2000004fffc0 len 3145728 PASSED 00:04:36.234 malloc 64 00:04:36.234 buf 0x2000004ffec0 len 64 PASSED 00:04:36.234 malloc 4194304 00:04:36.234 register 0x200000800000 6291456 00:04:36.234 buf 0x2000009fffc0 len 4194304 PASSED 00:04:36.234 free 0x2000004fffc0 3145728 00:04:36.234 free 0x2000004ffec0 64 00:04:36.234 unregister 0x200000400000 4194304 PASSED 00:04:36.234 free 0x2000009fffc0 4194304 00:04:36.493 unregister 0x200000800000 6291456 PASSED 00:04:36.493 malloc 8388608 00:04:36.493 register 0x200000400000 10485760 00:04:36.493 buf 0x2000005fffc0 len 8388608 PASSED 00:04:36.493 free 0x2000005fffc0 8388608 00:04:36.493 unregister 0x200000400000 10485760 PASSED 00:04:36.493 passed 00:04:36.493 00:04:36.493 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.493 suites 1 1 n/a 0 0 00:04:36.493 tests 1 1 1 0 0 00:04:36.493 asserts 15 15 15 0 n/a 00:04:36.493 00:04:36.493 Elapsed time = 0.060 seconds 00:04:36.493 00:04:36.493 real 0m0.195s 00:04:36.493 user 0m0.095s 00:04:36.493 sys 0m0.099s 00:04:36.493 05:21:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.493 05:21:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.493 ************************************ 00:04:36.493 END TEST env_mem_callbacks 00:04:36.493 ************************************ 00:04:36.493 00:04:36.493 real 0m14.244s 00:04:36.493 user 0m11.110s 00:04:36.493 sys 0m2.185s 00:04:36.493 05:21:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.493 05:21:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.493 ************************************ 00:04:36.493 END TEST env 00:04:36.493 ************************************ 00:04:36.493 05:21:32 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.493 05:21:32 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.493 05:21:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.493 05:21:32 -- common/autotest_common.sh@10 -- # set +x 00:04:36.493 ************************************ 00:04:36.493 START TEST rpc 00:04:36.493 ************************************ 00:04:36.493 05:21:32 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.493 * Looking for test storage... 00:04:36.493 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:36.493 05:21:33 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.493 05:21:33 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.493 05:21:33 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.753 05:21:33 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.753 05:21:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.753 05:21:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.753 05:21:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.753 05:21:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.753 05:21:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.753 05:21:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.753 05:21:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.753 05:21:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.753 05:21:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.753 05:21:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.753 05:21:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.753 05:21:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.754 05:21:33 rpc -- scripts/common.sh@345 -- # : 1 00:04:36.754 05:21:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.754 05:21:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.754 05:21:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.754 05:21:33 rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.754 05:21:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.754 05:21:33 rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.754 05:21:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.754 05:21:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.754 05:21:33 rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.754 05:21:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.754 05:21:33 rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.754 05:21:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.754 05:21:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.754 05:21:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.754 05:21:33 rpc -- scripts/common.sh@368 -- # return 0 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.754 --rc genhtml_branch_coverage=1 00:04:36.754 --rc genhtml_function_coverage=1 00:04:36.754 --rc genhtml_legend=1 00:04:36.754 --rc geninfo_all_blocks=1 00:04:36.754 --rc geninfo_unexecuted_blocks=1 00:04:36.754 00:04:36.754 ' 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.754 --rc genhtml_branch_coverage=1 00:04:36.754 --rc genhtml_function_coverage=1 00:04:36.754 --rc genhtml_legend=1 00:04:36.754 --rc geninfo_all_blocks=1 00:04:36.754 --rc geninfo_unexecuted_blocks=1 00:04:36.754 00:04:36.754 ' 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:36.754 --rc genhtml_branch_coverage=1 00:04:36.754 --rc genhtml_function_coverage=1 00:04:36.754 --rc genhtml_legend=1 00:04:36.754 --rc geninfo_all_blocks=1 00:04:36.754 --rc geninfo_unexecuted_blocks=1 00:04:36.754 00:04:36.754 ' 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.754 --rc genhtml_branch_coverage=1 00:04:36.754 --rc genhtml_function_coverage=1 00:04:36.754 --rc genhtml_legend=1 00:04:36.754 --rc geninfo_all_blocks=1 00:04:36.754 --rc geninfo_unexecuted_blocks=1 00:04:36.754 00:04:36.754 ' 00:04:36.754 05:21:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3127949 00:04:36.754 05:21:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.754 05:21:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3127949 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 3127949 ']' 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.754 05:21:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.754 05:21:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:36.754 [2024-11-27 05:21:33.223749] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
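The xtrace above shows `scripts/common.sh` deciding which lcov options to use by comparing versions: it splits each version on `.`, `-`, and `:` (the `IFS=.-:` lines) and compares fields numerically until one differs (`lt 1.15 2` succeeds because `1 < 2` in the first field). A hedged Python re-implementation of that comparison logic — a sketch of what the trace shows, not the script itself; missing trailing fields are treated as zero:

```python
import re

def version_fields(v: str):
    # Mirror the shell's IFS=.-: split, keeping numeric fields only.
    return [int(x) for x in re.split(r"[.:-]", v) if x.isdigit()]

def lt(v1: str, v2: str) -> bool:
    """True if v1 < v2, comparing field by field like the traced script."""
    a, b = version_fields(v1), version_fields(v2)
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    # List comparison on equal-length int lists stops at the first
    # differing field, matching the script's early return.
    return a < b

print(lt("1.15", "2"))  # -> True, as in the trace above
```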
00:04:36.754 [2024-11-27 05:21:33.223848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127949 ] 00:04:37.012 [2024-11-27 05:21:33.376176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.012 [2024-11-27 05:21:33.470358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:37.012 [2024-11-27 05:21:33.470402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3127949' to capture a snapshot of events at runtime. 00:04:37.012 [2024-11-27 05:21:33.470416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:37.012 [2024-11-27 05:21:33.470427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:37.012 [2024-11-27 05:21:33.470442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3127949 for offline analysis/debug. 
00:04:37.012 [2024-11-27 05:21:33.471867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.946 05:21:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.946 05:21:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:37.946 05:21:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:37.946 05:21:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:37.946 05:21:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.946 05:21:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.946 05:21:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.946 05:21:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.946 05:21:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.946 ************************************ 00:04:37.946 START TEST rpc_integrity 00:04:37.946 ************************************ 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 
00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.946 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.946 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.946 { 00:04:37.946 "name": "Malloc0", 00:04:37.946 "aliases": [ 00:04:37.946 "b0c0a212-3c09-41df-9f8f-3b622c7538c4" 00:04:37.946 ], 00:04:37.946 "product_name": "Malloc disk", 00:04:37.946 "block_size": 512, 00:04:37.946 "num_blocks": 16384, 00:04:37.946 "uuid": "b0c0a212-3c09-41df-9f8f-3b622c7538c4", 00:04:37.946 "assigned_rate_limits": { 00:04:37.946 "rw_ios_per_sec": 0, 00:04:37.946 "rw_mbytes_per_sec": 0, 00:04:37.946 "r_mbytes_per_sec": 0, 00:04:37.946 "w_mbytes_per_sec": 0 00:04:37.946 }, 00:04:37.946 "claimed": false, 00:04:37.946 "zoned": false, 00:04:37.946 "supported_io_types": { 00:04:37.946 "read": true, 00:04:37.946 "write": true, 00:04:37.946 "unmap": true, 00:04:37.946 "flush": true, 00:04:37.946 "reset": true, 00:04:37.946 "nvme_admin": false, 00:04:37.946 "nvme_io": false, 00:04:37.946 "nvme_io_md": false, 00:04:37.946 "write_zeroes": true, 00:04:37.946 "zcopy": true, 00:04:37.946 "get_zone_info": false, 00:04:37.946 "zone_management": false, 
00:04:37.946 "zone_append": false, 00:04:37.946 "compare": false, 00:04:37.946 "compare_and_write": false, 00:04:37.946 "abort": true, 00:04:37.946 "seek_hole": false, 00:04:37.946 "seek_data": false, 00:04:37.946 "copy": true, 00:04:37.946 "nvme_iov_md": false 00:04:37.946 }, 00:04:37.946 "memory_domains": [ 00:04:37.946 { 00:04:37.946 "dma_device_id": "system", 00:04:37.946 "dma_device_type": 1 00:04:37.946 }, 00:04:37.946 { 00:04:37.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.946 "dma_device_type": 2 00:04:37.946 } 00:04:37.947 ], 00:04:37.947 "driver_specific": {} 00:04:37.947 } 00:04:37.947 ]' 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.947 [2024-11-27 05:21:34.393307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.947 [2024-11-27 05:21:34.393352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.947 [2024-11-27 05:21:34.393374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021680 00:04:37.947 [2024-11-27 05:21:34.393388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.947 [2024-11-27 05:21:34.395522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.947 [2024-11-27 05:21:34.395550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.947 Passthru0 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.947 
05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.947 { 00:04:37.947 "name": "Malloc0", 00:04:37.947 "aliases": [ 00:04:37.947 "b0c0a212-3c09-41df-9f8f-3b622c7538c4" 00:04:37.947 ], 00:04:37.947 "product_name": "Malloc disk", 00:04:37.947 "block_size": 512, 00:04:37.947 "num_blocks": 16384, 00:04:37.947 "uuid": "b0c0a212-3c09-41df-9f8f-3b622c7538c4", 00:04:37.947 "assigned_rate_limits": { 00:04:37.947 "rw_ios_per_sec": 0, 00:04:37.947 "rw_mbytes_per_sec": 0, 00:04:37.947 "r_mbytes_per_sec": 0, 00:04:37.947 "w_mbytes_per_sec": 0 00:04:37.947 }, 00:04:37.947 "claimed": true, 00:04:37.947 "claim_type": "exclusive_write", 00:04:37.947 "zoned": false, 00:04:37.947 "supported_io_types": { 00:04:37.947 "read": true, 00:04:37.947 "write": true, 00:04:37.947 "unmap": true, 00:04:37.947 "flush": true, 00:04:37.947 "reset": true, 00:04:37.947 "nvme_admin": false, 00:04:37.947 "nvme_io": false, 00:04:37.947 "nvme_io_md": false, 00:04:37.947 "write_zeroes": true, 00:04:37.947 "zcopy": true, 00:04:37.947 "get_zone_info": false, 00:04:37.947 "zone_management": false, 00:04:37.947 "zone_append": false, 00:04:37.947 "compare": false, 00:04:37.947 "compare_and_write": false, 00:04:37.947 "abort": true, 00:04:37.947 "seek_hole": false, 00:04:37.947 "seek_data": false, 00:04:37.947 "copy": true, 00:04:37.947 "nvme_iov_md": false 00:04:37.947 }, 00:04:37.947 "memory_domains": [ 00:04:37.947 { 00:04:37.947 "dma_device_id": "system", 00:04:37.947 "dma_device_type": 1 00:04:37.947 }, 00:04:37.947 { 00:04:37.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.947 "dma_device_type": 2 00:04:37.947 } 00:04:37.947 ], 00:04:37.947 "driver_specific": {} 00:04:37.947 }, 00:04:37.947 { 00:04:37.947 "name": 
"Passthru0", 00:04:37.947 "aliases": [ 00:04:37.947 "9f47da75-7f1a-5ab3-8f64-eb138ddae635" 00:04:37.947 ], 00:04:37.947 "product_name": "passthru", 00:04:37.947 "block_size": 512, 00:04:37.947 "num_blocks": 16384, 00:04:37.947 "uuid": "9f47da75-7f1a-5ab3-8f64-eb138ddae635", 00:04:37.947 "assigned_rate_limits": { 00:04:37.947 "rw_ios_per_sec": 0, 00:04:37.947 "rw_mbytes_per_sec": 0, 00:04:37.947 "r_mbytes_per_sec": 0, 00:04:37.947 "w_mbytes_per_sec": 0 00:04:37.947 }, 00:04:37.947 "claimed": false, 00:04:37.947 "zoned": false, 00:04:37.947 "supported_io_types": { 00:04:37.947 "read": true, 00:04:37.947 "write": true, 00:04:37.947 "unmap": true, 00:04:37.947 "flush": true, 00:04:37.947 "reset": true, 00:04:37.947 "nvme_admin": false, 00:04:37.947 "nvme_io": false, 00:04:37.947 "nvme_io_md": false, 00:04:37.947 "write_zeroes": true, 00:04:37.947 "zcopy": true, 00:04:37.947 "get_zone_info": false, 00:04:37.947 "zone_management": false, 00:04:37.947 "zone_append": false, 00:04:37.947 "compare": false, 00:04:37.947 "compare_and_write": false, 00:04:37.947 "abort": true, 00:04:37.947 "seek_hole": false, 00:04:37.947 "seek_data": false, 00:04:37.947 "copy": true, 00:04:37.947 "nvme_iov_md": false 00:04:37.947 }, 00:04:37.947 "memory_domains": [ 00:04:37.947 { 00:04:37.947 "dma_device_id": "system", 00:04:37.947 "dma_device_type": 1 00:04:37.947 }, 00:04:37.947 { 00:04:37.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.947 "dma_device_type": 2 00:04:37.947 } 00:04:37.947 ], 00:04:37.947 "driver_specific": { 00:04:37.947 "passthru": { 00:04:37.947 "name": "Passthru0", 00:04:37.947 "base_bdev_name": "Malloc0" 00:04:37.947 } 00:04:37.947 } 00:04:37.947 } 00:04:37.947 ]' 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.947 05:21:34 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.947 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.947 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.206 05:21:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.206 00:04:38.206 real 0m0.295s 00:04:38.206 user 0m0.159s 00:04:38.206 sys 0m0.042s 00:04:38.206 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.206 05:21:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.206 ************************************ 00:04:38.206 END TEST rpc_integrity 00:04:38.206 ************************************ 00:04:38.206 05:21:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:38.206 05:21:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.206 05:21:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.206 05:21:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.206 ************************************ 00:04:38.206 START TEST rpc_plugins 00:04:38.206 
************************************ 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.206 { 00:04:38.206 "name": "Malloc1", 00:04:38.206 "aliases": [ 00:04:38.206 "eb820b5f-8c09-4934-9362-fa834e00db88" 00:04:38.206 ], 00:04:38.206 "product_name": "Malloc disk", 00:04:38.206 "block_size": 4096, 00:04:38.206 "num_blocks": 256, 00:04:38.206 "uuid": "eb820b5f-8c09-4934-9362-fa834e00db88", 00:04:38.206 "assigned_rate_limits": { 00:04:38.206 "rw_ios_per_sec": 0, 00:04:38.206 "rw_mbytes_per_sec": 0, 00:04:38.206 "r_mbytes_per_sec": 0, 00:04:38.206 "w_mbytes_per_sec": 0 00:04:38.206 }, 00:04:38.206 "claimed": false, 00:04:38.206 "zoned": false, 00:04:38.206 "supported_io_types": { 00:04:38.206 "read": true, 00:04:38.206 "write": true, 00:04:38.206 "unmap": true, 00:04:38.206 "flush": true, 00:04:38.206 "reset": true, 00:04:38.206 "nvme_admin": false, 00:04:38.206 "nvme_io": false, 00:04:38.206 "nvme_io_md": false, 00:04:38.206 "write_zeroes": true, 00:04:38.206 "zcopy": true, 00:04:38.206 "get_zone_info": false, 00:04:38.206 "zone_management": false, 00:04:38.206 "zone_append": false, 
00:04:38.206 "compare": false, 00:04:38.206 "compare_and_write": false, 00:04:38.206 "abort": true, 00:04:38.206 "seek_hole": false, 00:04:38.206 "seek_data": false, 00:04:38.206 "copy": true, 00:04:38.206 "nvme_iov_md": false 00:04:38.206 }, 00:04:38.206 "memory_domains": [ 00:04:38.206 { 00:04:38.206 "dma_device_id": "system", 00:04:38.206 "dma_device_type": 1 00:04:38.206 }, 00:04:38.206 { 00:04:38.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.206 "dma_device_type": 2 00:04:38.206 } 00:04:38.206 ], 00:04:38.206 "driver_specific": {} 00:04:38.206 } 00:04:38.206 ]' 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.206 05:21:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.206 00:04:38.206 real 0m0.147s 00:04:38.206 user 0m0.086s 00:04:38.206 sys 0m0.021s 00:04:38.206 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.207 05:21:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.207 ************************************ 00:04:38.207 END TEST 
rpc_plugins 00:04:38.207 ************************************ 00:04:38.465 05:21:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.465 05:21:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.465 05:21:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.465 05:21:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.465 ************************************ 00:04:38.465 START TEST rpc_trace_cmd_test 00:04:38.465 ************************************ 00:04:38.465 05:21:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:38.465 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.465 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.465 05:21:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.465 05:21:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.465 05:21:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.465 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.465 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3127949", 00:04:38.465 "tpoint_group_mask": "0x8", 00:04:38.465 "iscsi_conn": { 00:04:38.465 "mask": "0x2", 00:04:38.465 "tpoint_mask": "0x0" 00:04:38.465 }, 00:04:38.465 "scsi": { 00:04:38.466 "mask": "0x4", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "bdev": { 00:04:38.466 "mask": "0x8", 00:04:38.466 "tpoint_mask": "0xffffffffffffffff" 00:04:38.466 }, 00:04:38.466 "nvmf_rdma": { 00:04:38.466 "mask": "0x10", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "nvmf_tcp": { 00:04:38.466 "mask": "0x20", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "ftl": { 00:04:38.466 "mask": "0x40", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "blobfs": { 00:04:38.466 "mask": "0x80", 00:04:38.466 "tpoint_mask": "0x0" 
00:04:38.466 }, 00:04:38.466 "dsa": { 00:04:38.466 "mask": "0x200", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "thread": { 00:04:38.466 "mask": "0x400", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "nvme_pcie": { 00:04:38.466 "mask": "0x800", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "iaa": { 00:04:38.466 "mask": "0x1000", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "nvme_tcp": { 00:04:38.466 "mask": "0x2000", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "bdev_nvme": { 00:04:38.466 "mask": "0x4000", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "sock": { 00:04:38.466 "mask": "0x8000", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "blob": { 00:04:38.466 "mask": "0x10000", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "bdev_raid": { 00:04:38.466 "mask": "0x20000", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 }, 00:04:38.466 "scheduler": { 00:04:38.466 "mask": "0x40000", 00:04:38.466 "tpoint_mask": "0x0" 00:04:38.466 } 00:04:38.466 }' 00:04:38.466 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.466 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:38.466 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.466 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.466 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.466 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.466 05:21:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.466 05:21:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.466 05:21:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.466 05:21:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:38.466 
00:04:38.466 real 0m0.193s 00:04:38.466 user 0m0.155s 00:04:38.466 sys 0m0.031s 00:04:38.466 05:21:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.466 05:21:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.466 ************************************ 00:04:38.466 END TEST rpc_trace_cmd_test 00:04:38.466 ************************************ 00:04:38.725 05:21:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.725 05:21:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:38.725 05:21:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.725 05:21:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.725 05:21:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.725 05:21:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.725 ************************************ 00:04:38.725 START TEST rpc_daemon_integrity 00:04:38.725 ************************************ 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.725 { 00:04:38.725 "name": "Malloc2", 00:04:38.725 "aliases": [ 00:04:38.725 "3867ff0d-133f-4740-93e5-1ae1f91c7b98" 00:04:38.725 ], 00:04:38.725 "product_name": "Malloc disk", 00:04:38.725 "block_size": 512, 00:04:38.725 "num_blocks": 16384, 00:04:38.725 "uuid": "3867ff0d-133f-4740-93e5-1ae1f91c7b98", 00:04:38.725 "assigned_rate_limits": { 00:04:38.725 "rw_ios_per_sec": 0, 00:04:38.725 "rw_mbytes_per_sec": 0, 00:04:38.725 "r_mbytes_per_sec": 0, 00:04:38.725 "w_mbytes_per_sec": 0 00:04:38.725 }, 00:04:38.725 "claimed": false, 00:04:38.725 "zoned": false, 00:04:38.725 "supported_io_types": { 00:04:38.725 "read": true, 00:04:38.725 "write": true, 00:04:38.725 "unmap": true, 00:04:38.725 "flush": true, 00:04:38.725 "reset": true, 00:04:38.725 "nvme_admin": false, 00:04:38.725 "nvme_io": false, 00:04:38.725 "nvme_io_md": false, 00:04:38.725 "write_zeroes": true, 00:04:38.725 "zcopy": true, 00:04:38.725 "get_zone_info": false, 00:04:38.725 "zone_management": false, 00:04:38.725 "zone_append": false, 00:04:38.725 "compare": false, 00:04:38.725 "compare_and_write": false, 00:04:38.725 "abort": true, 00:04:38.725 "seek_hole": false, 00:04:38.725 "seek_data": false, 00:04:38.725 "copy": true, 00:04:38.725 "nvme_iov_md": false 00:04:38.725 }, 00:04:38.725 "memory_domains": [ 00:04:38.725 { 
00:04:38.725 "dma_device_id": "system", 00:04:38.725 "dma_device_type": 1 00:04:38.725 }, 00:04:38.725 { 00:04:38.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.725 "dma_device_type": 2 00:04:38.725 } 00:04:38.725 ], 00:04:38.725 "driver_specific": {} 00:04:38.725 } 00:04:38.725 ]' 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.725 [2024-11-27 05:21:35.259303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.725 [2024-11-27 05:21:35.259341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.725 [2024-11-27 05:21:35.259362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:38.725 [2024-11-27 05:21:35.259373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.725 [2024-11-27 05:21:35.261473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.725 [2024-11-27 05:21:35.261499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.725 Passthru0 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:38.725 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.725 { 00:04:38.725 "name": "Malloc2", 00:04:38.725 "aliases": [ 00:04:38.725 "3867ff0d-133f-4740-93e5-1ae1f91c7b98" 00:04:38.725 ], 00:04:38.725 "product_name": "Malloc disk", 00:04:38.725 "block_size": 512, 00:04:38.725 "num_blocks": 16384, 00:04:38.725 "uuid": "3867ff0d-133f-4740-93e5-1ae1f91c7b98", 00:04:38.725 "assigned_rate_limits": { 00:04:38.725 "rw_ios_per_sec": 0, 00:04:38.725 "rw_mbytes_per_sec": 0, 00:04:38.725 "r_mbytes_per_sec": 0, 00:04:38.725 "w_mbytes_per_sec": 0 00:04:38.725 }, 00:04:38.725 "claimed": true, 00:04:38.725 "claim_type": "exclusive_write", 00:04:38.725 "zoned": false, 00:04:38.725 "supported_io_types": { 00:04:38.725 "read": true, 00:04:38.725 "write": true, 00:04:38.725 "unmap": true, 00:04:38.725 "flush": true, 00:04:38.726 "reset": true, 00:04:38.726 "nvme_admin": false, 00:04:38.726 "nvme_io": false, 00:04:38.726 "nvme_io_md": false, 00:04:38.726 "write_zeroes": true, 00:04:38.726 "zcopy": true, 00:04:38.726 "get_zone_info": false, 00:04:38.726 "zone_management": false, 00:04:38.726 "zone_append": false, 00:04:38.726 "compare": false, 00:04:38.726 "compare_and_write": false, 00:04:38.726 "abort": true, 00:04:38.726 "seek_hole": false, 00:04:38.726 "seek_data": false, 00:04:38.726 "copy": true, 00:04:38.726 "nvme_iov_md": false 00:04:38.726 }, 00:04:38.726 "memory_domains": [ 00:04:38.726 { 00:04:38.726 "dma_device_id": "system", 00:04:38.726 "dma_device_type": 1 00:04:38.726 }, 00:04:38.726 { 00:04:38.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.726 "dma_device_type": 2 00:04:38.726 } 00:04:38.726 ], 00:04:38.726 "driver_specific": {} 00:04:38.726 }, 00:04:38.726 { 00:04:38.726 "name": "Passthru0", 00:04:38.726 "aliases": [ 00:04:38.726 "531eb594-0039-5363-aff6-c542b8fcfa3b" 00:04:38.726 ], 00:04:38.726 "product_name": "passthru", 00:04:38.726 "block_size": 512, 00:04:38.726 "num_blocks": 16384, 00:04:38.726 "uuid": 
"531eb594-0039-5363-aff6-c542b8fcfa3b", 00:04:38.726 "assigned_rate_limits": { 00:04:38.726 "rw_ios_per_sec": 0, 00:04:38.726 "rw_mbytes_per_sec": 0, 00:04:38.726 "r_mbytes_per_sec": 0, 00:04:38.726 "w_mbytes_per_sec": 0 00:04:38.726 }, 00:04:38.726 "claimed": false, 00:04:38.726 "zoned": false, 00:04:38.726 "supported_io_types": { 00:04:38.726 "read": true, 00:04:38.726 "write": true, 00:04:38.726 "unmap": true, 00:04:38.726 "flush": true, 00:04:38.726 "reset": true, 00:04:38.726 "nvme_admin": false, 00:04:38.726 "nvme_io": false, 00:04:38.726 "nvme_io_md": false, 00:04:38.726 "write_zeroes": true, 00:04:38.726 "zcopy": true, 00:04:38.726 "get_zone_info": false, 00:04:38.726 "zone_management": false, 00:04:38.726 "zone_append": false, 00:04:38.726 "compare": false, 00:04:38.726 "compare_and_write": false, 00:04:38.726 "abort": true, 00:04:38.726 "seek_hole": false, 00:04:38.726 "seek_data": false, 00:04:38.726 "copy": true, 00:04:38.726 "nvme_iov_md": false 00:04:38.726 }, 00:04:38.726 "memory_domains": [ 00:04:38.726 { 00:04:38.726 "dma_device_id": "system", 00:04:38.726 "dma_device_type": 1 00:04:38.726 }, 00:04:38.726 { 00:04:38.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.726 "dma_device_type": 2 00:04:38.726 } 00:04:38.726 ], 00:04:38.726 "driver_specific": { 00:04:38.726 "passthru": { 00:04:38.726 "name": "Passthru0", 00:04:38.726 "base_bdev_name": "Malloc2" 00:04:38.726 } 00:04:38.726 } 00:04:38.726 } 00:04:38.726 ]' 00:04:38.726 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.984 00:04:38.984 real 0m0.295s 00:04:38.984 user 0m0.164s 00:04:38.984 sys 0m0.039s 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.984 05:21:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.984 ************************************ 00:04:38.984 END TEST rpc_daemon_integrity 00:04:38.984 ************************************ 00:04:38.984 05:21:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.984 05:21:35 rpc -- rpc/rpc.sh@84 -- # killprocess 3127949 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 3127949 ']' 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@958 -- # kill -0 3127949 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.984 05:21:35 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3127949 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3127949' 00:04:38.984 killing process with pid 3127949 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@973 -- # kill 3127949 00:04:38.984 05:21:35 rpc -- common/autotest_common.sh@978 -- # wait 3127949 00:04:41.516 00:04:41.516 real 0m4.749s 00:04:41.516 user 0m5.196s 00:04:41.516 sys 0m0.985s 00:04:41.516 05:21:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.516 05:21:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.516 ************************************ 00:04:41.516 END TEST rpc 00:04:41.516 ************************************ 00:04:41.516 05:21:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.516 05:21:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.516 05:21:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.516 05:21:37 -- common/autotest_common.sh@10 -- # set +x 00:04:41.516 ************************************ 00:04:41.516 START TEST skip_rpc 00:04:41.516 ************************************ 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.516 * Looking for test storage... 
00:04:41.516 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.516 05:21:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 05:21:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:41.516 05:21:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:41.516 05:21:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.516 05:21:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.516 ************************************ 00:04:41.516 START TEST skip_rpc 00:04:41.516 ************************************ 00:04:41.516 05:21:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:41.516 05:21:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3128937 00:04:41.516 05:21:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.516 05:21:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.516 05:21:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.773 
[2024-11-27 05:21:38.122900] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:41.773 [2024-11-27 05:21:38.122980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128937 ] 00:04:41.773 [2024-11-27 05:21:38.271370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.033 [2024-11-27 05:21:38.365138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3128937 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3128937 ']' 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3128937 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128937 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128937' 00:04:47.298 killing process with pid 3128937 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3128937 00:04:47.298 05:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3128937 00:04:49.203 00:04:49.203 real 0m7.288s 00:04:49.203 user 0m6.870s 00:04:49.203 sys 0m0.465s 00:04:49.203 05:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.203 05:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.203 ************************************ 00:04:49.203 END TEST skip_rpc 00:04:49.203 ************************************ 00:04:49.203 05:21:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:49.203 05:21:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.203 05:21:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.203 05:21:45 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.203 ************************************ 00:04:49.203 START TEST skip_rpc_with_json 00:04:49.203 ************************************ 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3130289 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3130289 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3130289 ']' 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.203 05:21:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.203 [2024-11-27 05:21:45.500027] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:04:49.203 [2024-11-27 05:21:45.500137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130289 ] 00:04:49.203 [2024-11-27 05:21:45.653525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.203 [2024-11-27 05:21:45.749289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.137 [2024-11-27 05:21:46.488131] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:50.137 request: 00:04:50.137 { 00:04:50.137 "trtype": "tcp", 00:04:50.137 "method": "nvmf_get_transports", 00:04:50.137 "req_id": 1 00:04:50.137 } 00:04:50.137 Got JSON-RPC error response 00:04:50.137 response: 00:04:50.137 { 00:04:50.137 "code": -19, 00:04:50.137 "message": "No such device" 00:04:50.137 } 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.137 [2024-11-27 05:21:46.500257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.137 05:21:46 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.137 05:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:50.137 { 00:04:50.137 "subsystems": [ 00:04:50.137 { 00:04:50.137 "subsystem": "fsdev", 00:04:50.137 "config": [ 00:04:50.137 { 00:04:50.137 "method": "fsdev_set_opts", 00:04:50.137 "params": { 00:04:50.137 "fsdev_io_pool_size": 65535, 00:04:50.137 "fsdev_io_cache_size": 256 00:04:50.137 } 00:04:50.137 } 00:04:50.137 ] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "keyring", 00:04:50.137 "config": [] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "iobuf", 00:04:50.137 "config": [ 00:04:50.137 { 00:04:50.137 "method": "iobuf_set_options", 00:04:50.137 "params": { 00:04:50.137 "small_pool_count": 8192, 00:04:50.137 "large_pool_count": 1024, 00:04:50.137 "small_bufsize": 8192, 00:04:50.137 "large_bufsize": 135168, 00:04:50.137 "enable_numa": false 00:04:50.137 } 00:04:50.137 } 00:04:50.137 ] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "sock", 00:04:50.137 "config": [ 00:04:50.137 { 00:04:50.137 "method": "sock_set_default_impl", 00:04:50.137 "params": { 00:04:50.137 "impl_name": "posix" 00:04:50.137 } 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "method": "sock_impl_set_options", 00:04:50.137 "params": { 00:04:50.137 "impl_name": "ssl", 00:04:50.137 "recv_buf_size": 4096, 00:04:50.137 "send_buf_size": 4096, 00:04:50.137 "enable_recv_pipe": true, 00:04:50.137 "enable_quickack": false, 00:04:50.137 "enable_placement_id": 
0, 00:04:50.137 "enable_zerocopy_send_server": true, 00:04:50.137 "enable_zerocopy_send_client": false, 00:04:50.137 "zerocopy_threshold": 0, 00:04:50.137 "tls_version": 0, 00:04:50.137 "enable_ktls": false 00:04:50.137 } 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "method": "sock_impl_set_options", 00:04:50.137 "params": { 00:04:50.137 "impl_name": "posix", 00:04:50.137 "recv_buf_size": 2097152, 00:04:50.137 "send_buf_size": 2097152, 00:04:50.137 "enable_recv_pipe": true, 00:04:50.137 "enable_quickack": false, 00:04:50.137 "enable_placement_id": 0, 00:04:50.137 "enable_zerocopy_send_server": true, 00:04:50.137 "enable_zerocopy_send_client": false, 00:04:50.137 "zerocopy_threshold": 0, 00:04:50.137 "tls_version": 0, 00:04:50.137 "enable_ktls": false 00:04:50.137 } 00:04:50.137 } 00:04:50.137 ] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "vmd", 00:04:50.137 "config": [] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "accel", 00:04:50.137 "config": [ 00:04:50.137 { 00:04:50.137 "method": "accel_set_options", 00:04:50.137 "params": { 00:04:50.137 "small_cache_size": 128, 00:04:50.137 "large_cache_size": 16, 00:04:50.137 "task_count": 2048, 00:04:50.137 "sequence_count": 2048, 00:04:50.137 "buf_count": 2048 00:04:50.137 } 00:04:50.137 } 00:04:50.137 ] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "bdev", 00:04:50.137 "config": [ 00:04:50.137 { 00:04:50.137 "method": "bdev_set_options", 00:04:50.137 "params": { 00:04:50.137 "bdev_io_pool_size": 65535, 00:04:50.137 "bdev_io_cache_size": 256, 00:04:50.137 "bdev_auto_examine": true, 00:04:50.137 "iobuf_small_cache_size": 128, 00:04:50.137 "iobuf_large_cache_size": 16 00:04:50.137 } 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "method": "bdev_raid_set_options", 00:04:50.137 "params": { 00:04:50.137 "process_window_size_kb": 1024, 00:04:50.137 "process_max_bandwidth_mb_sec": 0 00:04:50.137 } 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "method": "bdev_iscsi_set_options", 00:04:50.137 "params": 
{ 00:04:50.137 "timeout_sec": 30 00:04:50.137 } 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "method": "bdev_nvme_set_options", 00:04:50.137 "params": { 00:04:50.137 "action_on_timeout": "none", 00:04:50.137 "timeout_us": 0, 00:04:50.137 "timeout_admin_us": 0, 00:04:50.137 "keep_alive_timeout_ms": 10000, 00:04:50.137 "arbitration_burst": 0, 00:04:50.137 "low_priority_weight": 0, 00:04:50.137 "medium_priority_weight": 0, 00:04:50.137 "high_priority_weight": 0, 00:04:50.137 "nvme_adminq_poll_period_us": 10000, 00:04:50.137 "nvme_ioq_poll_period_us": 0, 00:04:50.137 "io_queue_requests": 0, 00:04:50.137 "delay_cmd_submit": true, 00:04:50.137 "transport_retry_count": 4, 00:04:50.137 "bdev_retry_count": 3, 00:04:50.137 "transport_ack_timeout": 0, 00:04:50.137 "ctrlr_loss_timeout_sec": 0, 00:04:50.137 "reconnect_delay_sec": 0, 00:04:50.137 "fast_io_fail_timeout_sec": 0, 00:04:50.137 "disable_auto_failback": false, 00:04:50.137 "generate_uuids": false, 00:04:50.137 "transport_tos": 0, 00:04:50.137 "nvme_error_stat": false, 00:04:50.137 "rdma_srq_size": 0, 00:04:50.137 "io_path_stat": false, 00:04:50.137 "allow_accel_sequence": false, 00:04:50.137 "rdma_max_cq_size": 0, 00:04:50.137 "rdma_cm_event_timeout_ms": 0, 00:04:50.137 "dhchap_digests": [ 00:04:50.137 "sha256", 00:04:50.137 "sha384", 00:04:50.137 "sha512" 00:04:50.137 ], 00:04:50.137 "dhchap_dhgroups": [ 00:04:50.137 "null", 00:04:50.137 "ffdhe2048", 00:04:50.137 "ffdhe3072", 00:04:50.137 "ffdhe4096", 00:04:50.137 "ffdhe6144", 00:04:50.137 "ffdhe8192" 00:04:50.137 ] 00:04:50.137 } 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "method": "bdev_nvme_set_hotplug", 00:04:50.137 "params": { 00:04:50.137 "period_us": 100000, 00:04:50.137 "enable": false 00:04:50.137 } 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "method": "bdev_wait_for_examine" 00:04:50.137 } 00:04:50.137 ] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "scsi", 00:04:50.137 "config": null 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": 
"scheduler", 00:04:50.137 "config": [ 00:04:50.137 { 00:04:50.137 "method": "framework_set_scheduler", 00:04:50.137 "params": { 00:04:50.137 "name": "static" 00:04:50.137 } 00:04:50.137 } 00:04:50.137 ] 00:04:50.137 }, 00:04:50.137 { 00:04:50.137 "subsystem": "vhost_scsi", 00:04:50.137 "config": [] 00:04:50.137 }, 00:04:50.137 { 00:04:50.138 "subsystem": "vhost_blk", 00:04:50.138 "config": [] 00:04:50.138 }, 00:04:50.138 { 00:04:50.138 "subsystem": "ublk", 00:04:50.138 "config": [] 00:04:50.138 }, 00:04:50.138 { 00:04:50.138 "subsystem": "nbd", 00:04:50.138 "config": [] 00:04:50.138 }, 00:04:50.138 { 00:04:50.138 "subsystem": "nvmf", 00:04:50.138 "config": [ 00:04:50.138 { 00:04:50.138 "method": "nvmf_set_config", 00:04:50.138 "params": { 00:04:50.138 "discovery_filter": "match_any", 00:04:50.138 "admin_cmd_passthru": { 00:04:50.138 "identify_ctrlr": false 00:04:50.138 }, 00:04:50.138 "dhchap_digests": [ 00:04:50.138 "sha256", 00:04:50.138 "sha384", 00:04:50.138 "sha512" 00:04:50.138 ], 00:04:50.138 "dhchap_dhgroups": [ 00:04:50.138 "null", 00:04:50.138 "ffdhe2048", 00:04:50.138 "ffdhe3072", 00:04:50.138 "ffdhe4096", 00:04:50.138 "ffdhe6144", 00:04:50.138 "ffdhe8192" 00:04:50.138 ] 00:04:50.138 } 00:04:50.138 }, 00:04:50.138 { 00:04:50.138 "method": "nvmf_set_max_subsystems", 00:04:50.138 "params": { 00:04:50.138 "max_subsystems": 1024 00:04:50.138 } 00:04:50.138 }, 00:04:50.138 { 00:04:50.138 "method": "nvmf_set_crdt", 00:04:50.138 "params": { 00:04:50.138 "crdt1": 0, 00:04:50.138 "crdt2": 0, 00:04:50.138 "crdt3": 0 00:04:50.138 } 00:04:50.138 }, 00:04:50.138 { 00:04:50.138 "method": "nvmf_create_transport", 00:04:50.138 "params": { 00:04:50.138 "trtype": "TCP", 00:04:50.138 "max_queue_depth": 128, 00:04:50.138 "max_io_qpairs_per_ctrlr": 127, 00:04:50.138 "in_capsule_data_size": 4096, 00:04:50.138 "max_io_size": 131072, 00:04:50.138 "io_unit_size": 131072, 00:04:50.138 "max_aq_depth": 128, 00:04:50.138 "num_shared_buffers": 511, 00:04:50.138 "buf_cache_size": 
4294967295, 00:04:50.138 "dif_insert_or_strip": false, 00:04:50.138 "zcopy": false, 00:04:50.138 "c2h_success": true, 00:04:50.138 "sock_priority": 0, 00:04:50.138 "abort_timeout_sec": 1, 00:04:50.138 "ack_timeout": 0, 00:04:50.138 "data_wr_pool_size": 0 00:04:50.138 } 00:04:50.138 } 00:04:50.138 ] 00:04:50.138 }, 00:04:50.138 { 00:04:50.138 "subsystem": "iscsi", 00:04:50.138 "config": [ 00:04:50.138 { 00:04:50.138 "method": "iscsi_set_options", 00:04:50.138 "params": { 00:04:50.138 "node_base": "iqn.2016-06.io.spdk", 00:04:50.138 "max_sessions": 128, 00:04:50.138 "max_connections_per_session": 2, 00:04:50.138 "max_queue_depth": 64, 00:04:50.138 "default_time2wait": 2, 00:04:50.138 "default_time2retain": 20, 00:04:50.138 "first_burst_length": 8192, 00:04:50.138 "immediate_data": true, 00:04:50.138 "allow_duplicated_isid": false, 00:04:50.138 "error_recovery_level": 0, 00:04:50.138 "nop_timeout": 60, 00:04:50.138 "nop_in_interval": 30, 00:04:50.138 "disable_chap": false, 00:04:50.138 "require_chap": false, 00:04:50.138 "mutual_chap": false, 00:04:50.138 "chap_group": 0, 00:04:50.138 "max_large_datain_per_connection": 64, 00:04:50.138 "max_r2t_per_connection": 4, 00:04:50.138 "pdu_pool_size": 36864, 00:04:50.138 "immediate_data_pool_size": 16384, 00:04:50.138 "data_out_pool_size": 2048 00:04:50.138 } 00:04:50.138 } 00:04:50.138 ] 00:04:50.138 } 00:04:50.138 ] 00:04:50.138 } 00:04:50.138 05:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:50.138 05:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3130289 00:04:50.138 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3130289 ']' 00:04:50.138 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3130289 00:04:50.138 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:50.138 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:04:50.138 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130289 00:04:50.395 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.395 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.395 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130289' 00:04:50.395 killing process with pid 3130289 00:04:50.395 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3130289 00:04:50.395 05:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3130289 00:04:52.928 05:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3130841 00:04:52.928 05:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:52.928 05:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:58.231 05:21:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3130841 00:04:58.231 05:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3130841 ']' 00:04:58.231 05:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3130841 00:04:58.231 05:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:58.231 05:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.231 05:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130841 00:04:58.231 05:21:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.231 05:21:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:04:58.231 05:21:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130841' 00:04:58.231 killing process with pid 3130841 00:04:58.231 05:21:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3130841 00:04:58.231 05:21:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3130841 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:00.137 00:05:00.137 real 0m10.812s 00:05:00.137 user 0m10.282s 00:05:00.137 sys 0m1.030s 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.137 ************************************ 00:05:00.137 END TEST skip_rpc_with_json 00:05:00.137 ************************************ 00:05:00.137 05:21:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.137 05:21:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.137 05:21:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.137 05:21:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.137 ************************************ 00:05:00.137 START TEST skip_rpc_with_delay 00:05:00.137 ************************************ 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # local es=0 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.137 [2024-11-27 05:21:56.391595] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.137 00:05:00.137 real 0m0.157s 00:05:00.137 user 0m0.070s 00:05:00.137 sys 0m0.086s 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.137 05:21:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.137 ************************************ 00:05:00.137 END TEST skip_rpc_with_delay 00:05:00.137 ************************************ 00:05:00.137 05:21:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.137 05:21:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.137 05:21:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.137 05:21:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.137 05:21:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.137 05:21:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.137 ************************************ 00:05:00.137 START TEST exit_on_failed_rpc_init 00:05:00.137 ************************************ 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3132225 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3132225 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.137 
05:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3132225 ']' 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.137 05:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.137 [2024-11-27 05:21:56.641691] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:00.137 [2024-11-27 05:21:56.641789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132225 ] 00:05:00.397 [2024-11-27 05:21:56.793113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.397 [2024-11-27 05:21:56.888561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.337 05:21:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:01.337 05:21:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.337 [2024-11-27 05:21:57.728715] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:01.337 [2024-11-27 05:21:57.728807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132494 ] 00:05:01.337 [2024-11-27 05:21:57.879562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.598 [2024-11-27 05:21:57.980976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.598 [2024-11-27 05:21:57.981056] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:01.598 [2024-11-27 05:21:57.981077] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:01.598 [2024-11-27 05:21:57.981088] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3132225 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3132225 ']' 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3132225 00:05:01.858 05:21:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132225 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132225' 00:05:01.858 killing process with pid 3132225 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3132225 00:05:01.858 05:21:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3132225 00:05:04.397 00:05:04.397 real 0m3.936s 00:05:04.397 user 0m4.203s 00:05:04.397 sys 0m0.731s 00:05:04.397 05:22:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.397 05:22:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.397 ************************************ 00:05:04.397 END TEST exit_on_failed_rpc_init 00:05:04.397 ************************************ 00:05:04.397 05:22:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:04.397 00:05:04.397 real 0m22.718s 00:05:04.397 user 0m21.650s 00:05:04.397 sys 0m2.652s 00:05:04.397 05:22:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.397 05:22:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.397 ************************************ 00:05:04.397 END TEST skip_rpc 00:05:04.397 ************************************ 00:05:04.397 05:22:00 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.397 05:22:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.397 05:22:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.397 05:22:00 -- common/autotest_common.sh@10 -- # set +x 00:05:04.397 ************************************ 00:05:04.397 START TEST rpc_client 00:05:04.397 ************************************ 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.397 * Looking for test storage... 00:05:04.397 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 
00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.397 05:22:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.397 --rc genhtml_branch_coverage=1 00:05:04.397 --rc genhtml_function_coverage=1 00:05:04.397 --rc genhtml_legend=1 00:05:04.397 --rc geninfo_all_blocks=1 00:05:04.397 --rc geninfo_unexecuted_blocks=1 00:05:04.397 00:05:04.397 ' 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.397 --rc genhtml_branch_coverage=1 00:05:04.397 
--rc genhtml_function_coverage=1 00:05:04.397 --rc genhtml_legend=1 00:05:04.397 --rc geninfo_all_blocks=1 00:05:04.397 --rc geninfo_unexecuted_blocks=1 00:05:04.397 00:05:04.397 ' 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.397 --rc genhtml_branch_coverage=1 00:05:04.397 --rc genhtml_function_coverage=1 00:05:04.397 --rc genhtml_legend=1 00:05:04.397 --rc geninfo_all_blocks=1 00:05:04.397 --rc geninfo_unexecuted_blocks=1 00:05:04.397 00:05:04.397 ' 00:05:04.397 05:22:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.397 --rc genhtml_branch_coverage=1 00:05:04.397 --rc genhtml_function_coverage=1 00:05:04.397 --rc genhtml_legend=1 00:05:04.397 --rc geninfo_all_blocks=1 00:05:04.397 --rc geninfo_unexecuted_blocks=1 00:05:04.397 00:05:04.398 ' 00:05:04.398 05:22:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:04.398 OK 00:05:04.398 05:22:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:04.398 00:05:04.398 real 0m0.215s 00:05:04.398 user 0m0.098s 00:05:04.398 sys 0m0.125s 00:05:04.398 05:22:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.398 05:22:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:04.398 ************************************ 00:05:04.398 END TEST rpc_client 00:05:04.398 ************************************ 00:05:04.398 05:22:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:04.398 05:22:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.398 05:22:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.398 05:22:00 -- common/autotest_common.sh@10 -- # set +x 
00:05:04.398 ************************************ 00:05:04.398 START TEST json_config 00:05:04.398 ************************************ 00:05:04.398 05:22:00 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:04.398 05:22:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.398 05:22:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.398 05:22:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.658 05:22:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.658 05:22:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.658 05:22:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.658 05:22:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.658 05:22:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.658 05:22:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.658 05:22:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.658 05:22:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.658 05:22:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:04.658 05:22:01 json_config -- scripts/common.sh@345 -- # : 1 00:05:04.658 05:22:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.658 05:22:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.658 05:22:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:04.658 05:22:01 json_config -- scripts/common.sh@353 -- # local d=1 00:05:04.658 05:22:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.658 05:22:01 json_config -- scripts/common.sh@355 -- # echo 1 00:05:04.658 05:22:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.658 05:22:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@353 -- # local d=2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.658 05:22:01 json_config -- scripts/common.sh@355 -- # echo 2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.658 05:22:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.658 05:22:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.658 05:22:01 json_config -- scripts/common.sh@368 -- # return 0 00:05:04.658 05:22:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.658 05:22:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.658 --rc genhtml_branch_coverage=1 00:05:04.658 --rc genhtml_function_coverage=1 00:05:04.658 --rc genhtml_legend=1 00:05:04.658 --rc geninfo_all_blocks=1 00:05:04.658 --rc geninfo_unexecuted_blocks=1 00:05:04.658 00:05:04.658 ' 00:05:04.658 05:22:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.658 --rc genhtml_branch_coverage=1 00:05:04.658 --rc genhtml_function_coverage=1 00:05:04.658 --rc genhtml_legend=1 00:05:04.658 --rc geninfo_all_blocks=1 00:05:04.658 --rc geninfo_unexecuted_blocks=1 00:05:04.658 00:05:04.658 ' 00:05:04.658 05:22:01 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.658 --rc genhtml_branch_coverage=1 00:05:04.658 --rc genhtml_function_coverage=1 00:05:04.658 --rc genhtml_legend=1 00:05:04.658 --rc geninfo_all_blocks=1 00:05:04.658 --rc geninfo_unexecuted_blocks=1 00:05:04.658 00:05:04.658 ' 00:05:04.658 05:22:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.658 --rc genhtml_branch_coverage=1 00:05:04.658 --rc genhtml_function_coverage=1 00:05:04.658 --rc genhtml_legend=1 00:05:04.658 --rc geninfo_all_blocks=1 00:05:04.658 --rc geninfo_unexecuted_blocks=1 00:05:04.658 00:05:04.658 ' 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:04.658 05:22:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.658 05:22:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.658 05:22:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.658 05:22:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.658 05:22:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.658 05:22:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.658 05:22:01 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.658 05:22:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.658 05:22:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@51 -- # : 0 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.658 05:22:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:04.658 05:22:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:04.659 INFO: JSON configuration test init 00:05:04.659 05:22:01 json_config -- 
json_config/json_config.sh@364 -- # json_config_test_init 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.659 05:22:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:04.659 05:22:01 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.659 05:22:01 json_config -- json_config/common.sh@10 -- # shift 00:05:04.659 05:22:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.659 05:22:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.659 05:22:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.659 05:22:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.659 05:22:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.659 05:22:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3133165 00:05:04.659 05:22:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.659 Waiting for target to run... 
00:05:04.659 05:22:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3133165 /var/tmp/spdk_tgt.sock 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 3133165 ']' 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.659 05:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.659 05:22:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:04.659 [2024-11-27 05:22:01.170419] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:04.659 [2024-11-27 05:22:01.170520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133165 ] 00:05:05.225 [2024-11-27 05:22:01.683795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.225 [2024-11-27 05:22:01.792959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.484 05:22:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.484 05:22:01 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:05.484 05:22:01 json_config -- json_config/common.sh@26 -- # echo '' 00:05:05.484 00:05:05.484 05:22:01 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:05.484 05:22:01 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:05.484 05:22:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.484 05:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.484 05:22:01 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:05.484 05:22:01 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:05.484 05:22:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.484 05:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.484 05:22:01 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:05.484 05:22:01 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:05.484 05:22:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:09.678 05:22:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.678 05:22:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:09.678 05:22:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:09.678 05:22:05 json_config -- json_config/json_config.sh@54 -- # sort 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:09.679 05:22:05 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:09.679 05:22:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.679 05:22:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:09.679 05:22:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.679 05:22:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:09.679 05:22:05 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@440 -- # 
remove_spdk_ns 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.679 05:22:05 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:09.679 05:22:05 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:09.679 05:22:05 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:09.679 05:22:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:17.835 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 
00:05:17.835 05:22:14 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:17.835 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:17.835 Found net devices under 0000:d9:00.0: mlx_0_0 
00:05:17.835 05:22:14 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:17.835 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@62 -- # uname 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:17.835 05:22:14 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:18.095 05:22:14 
json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@78 -- # 
get_ip_address mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:18.095 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:18.095 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:18.095 altname enp217s0f0np0 00:05:18.095 altname ens818f0np0 00:05:18.095 inet 192.168.100.8/24 scope global mlx_0_0 00:05:18.095 valid_lft forever preferred_lft forever 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:18.095 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:18.095 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:18.095 altname enp217s0f1np1 00:05:18.095 altname ens818f1np1 00:05:18.095 inet 192.168.100.9/24 scope global mlx_0_1 00:05:18.095 valid_lft forever preferred_lft forever 00:05:18.095 05:22:14 json_config -- 
nvmf/common.sh@450 -- # return 0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:18.095 
05:22:14 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:18.095 192.168.100.9' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:18.095 192.168.100.9' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@485 -- # head -n 1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:18.095 192.168.100.9' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@486 -- # head -n 1 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@496 -- # '[' rdma == 
tcp ']' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:18.095 05:22:14 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:18.095 05:22:14 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:18.095 05:22:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.095 05:22:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.355 MallocForNvmf0 00:05:18.355 05:22:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.355 05:22:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.614 MallocForNvmf1 00:05:18.614 05:22:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:18.614 05:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:18.614 [2024-11-27 05:22:15.169826] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:18.873 [2024-11-27 05:22:15.202848] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029440/0x7f576025f940) succeed. 00:05:18.873 [2024-11-27 05:22:15.215379] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000295c0/0x7f576021b940) succeed. 
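The `get_ip_address` helper and the `RDMA_IP_LIST` splitting traced earlier in the log reduce to standard text pipelines. A self-contained sketch (the captured `ip -o -4` line and the variable names here are illustrative stand-ins, not live output):

```shell
# Hypothetical captured line from `ip -o -4 addr show dev mlx_0_0`;
# field 4 is ADDR/PREFIX, and cut strips the prefix length.
line='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0\       valid_lft forever preferred_lft forever'
ip=$(printf '%s\n' "$line" | awk '{print $4}' | cut -d/ -f1)
echo "$ip"    # 192.168.100.8

# NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP come from the first and
# second lines of the collected list, via the head/tail idiom in the log.
rdma_ip_list='192.168.100.8
192.168.100.9'
first=$(echo "$rdma_ip_list" | head -n 1)
second=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)
echo "$first $second"    # 192.168.100.8 192.168.100.9
```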
00:05:18.873 05:22:15 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.873 05:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.873 05:22:15 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.873 05:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.132 05:22:15 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.132 05:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.391 05:22:15 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:19.391 05:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:19.650 [2024-11-27 05:22:15.993159] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:19.650 05:22:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:19.650 05:22:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.650 05:22:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.650 05:22:16 json_config -- 
json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:19.650 05:22:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.650 05:22:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.650 05:22:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:19.650 05:22:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.651 05:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.910 MallocBdevForConfigChangeCheck 00:05:19.910 05:22:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:19.910 05:22:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.910 05:22:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.910 05:22:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:19.910 05:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.170 05:22:16 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:20.170 INFO: shutting down applications... 
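Condensed, the target bring-up the preceding entries performed is just this RPC sequence (commands copied from the log; replaying them by hand assumes a spdk_tgt already listening on /var/tmp/spdk_tgt.sock):

```shell
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
```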
00:05:20.170 05:22:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:20.170 05:22:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:20.170 05:22:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:20.170 05:22:16 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:22.711 Calling clear_iscsi_subsystem 00:05:22.711 Calling clear_nvmf_subsystem 00:05:22.711 Calling clear_nbd_subsystem 00:05:22.711 Calling clear_ublk_subsystem 00:05:22.711 Calling clear_vhost_blk_subsystem 00:05:22.711 Calling clear_vhost_scsi_subsystem 00:05:22.711 Calling clear_bdev_subsystem 00:05:22.711 05:22:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:22.711 05:22:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:22.711 05:22:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:22.711 05:22:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.711 05:22:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:22.711 05:22:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:22.971 05:22:19 json_config -- json_config/json_config.sh@352 -- # break 00:05:22.971 05:22:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:22.971 05:22:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:22.971 05:22:19 json_config -- json_config/common.sh@31 -- # local 
app=target 00:05:22.971 05:22:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.971 05:22:19 json_config -- json_config/common.sh@35 -- # [[ -n 3133165 ]] 00:05:22.971 05:22:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3133165 00:05:22.971 05:22:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.971 05:22:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.971 05:22:19 json_config -- json_config/common.sh@41 -- # kill -0 3133165 00:05:22.971 05:22:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.539 05:22:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.539 05:22:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.539 05:22:20 json_config -- json_config/common.sh@41 -- # kill -0 3133165 00:05:23.539 05:22:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.109 05:22:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.109 05:22:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.109 05:22:20 json_config -- json_config/common.sh@41 -- # kill -0 3133165 00:05:24.109 05:22:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.109 05:22:20 json_config -- json_config/common.sh@43 -- # break 00:05:24.109 05:22:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.109 05:22:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.109 SPDK target shutdown done 00:05:24.109 05:22:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:24.109 INFO: relaunching applications... 
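The shutdown path above sends SIGINT, then polls `kill -0` (a pure existence check) with `sleep 0.5` between tries, up to 30 attempts. A minimal sketch, with a hypothetical `graceful_stop` helper standing in for json_config/common.sh; the demo signals with TERM rather than INT only because non-interactive shells start background children with SIGINT ignored:

```shell
graceful_stop() {
    local pid=$1 sig=${2:-TERM} tries=30 i=0
    kill -s "$sig" "$pid" 2>/dev/null || return 0   # already gone
    while [ "$i" -lt "$tries" ]; do
        kill -0 "$pid" 2>/dev/null || return 0      # process exited
        sleep 0.5
        i=$((i+1))
    done
    return 1                                        # still alive after ~15s
}

sleep 300 &                 # stand-in for the spdk_tgt process
target_pid=$!
graceful_stop "$target_pid" TERM && echo 'SPDK target shutdown done'
```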
00:05:24.109 05:22:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.109 05:22:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.109 05:22:20 json_config -- json_config/common.sh@10 -- # shift 00:05:24.109 05:22:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.109 05:22:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.109 05:22:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.109 05:22:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.109 05:22:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.109 05:22:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3139271 00:05:24.109 05:22:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.109 Waiting for target to run... 00:05:24.109 05:22:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.109 05:22:20 json_config -- json_config/common.sh@25 -- # waitforlisten 3139271 /var/tmp/spdk_tgt.sock 00:05:24.109 05:22:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 3139271 ']' 00:05:24.109 05:22:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.109 05:22:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.109 05:22:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
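`waitforlisten` above blocks until the relaunched target is accepting RPCs on /var/tmp/spdk_tgt.sock. A simplified, self-contained sketch of that poll-with-retry-cap pattern (`wait_for_path` and the marker file are illustrative; the real helper checks the RPC socket, not a plain file):

```shell
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i+1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}

marker=$(mktemp -u /tmp/listen_demo.XXXXXX)   # a path that does not exist yet
( sleep 0.3; : > "$marker" ) &                # "target" creates it shortly
up=no
wait_for_path "$marker" && up=yes && echo 'target is up'
rm -f "$marker"
```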
00:05:24.109 05:22:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.109 05:22:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.109 [2024-11-27 05:22:20.636987] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:24.109 [2024-11-27 05:22:20.637085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139271 ] 00:05:24.678 [2024-11-27 05:22:21.148367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.678 [2024-11-27 05:22:21.260670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.874 [2024-11-27 05:22:24.905036] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029bc0/0x7f3698bbd940) succeed. 00:05:28.874 [2024-11-27 05:22:24.916488] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029d40/0x7f3698b79940) succeed. 00:05:28.874 [2024-11-27 05:22:24.977536] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:28.874 05:22:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.874 05:22:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:28.874 05:22:25 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.874 00:05:28.875 05:22:25 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:28.875 05:22:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:28.875 INFO: Checking if target configuration is the same... 
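The "Checking if target configuration is the same" step that follows dumps the live config and diffs it against the saved JSON after normalizing both sides. The core idea can be sketched with plain `sort` standing in for config_filter.py -method sort (file names here are illustrative):

```shell
# Two configs with the same content in different order: normalization
# must make them compare equal.
printf '%s\n' beta alpha > cfg_before.txt
printf '%s\n' alpha beta > cfg_after.txt

same=no
if diff -u <(sort cfg_before.txt) <(sort cfg_after.txt) > /dev/null; then
    same=yes
    echo 'INFO: JSON config files are the same'
fi
rm -f cfg_before.txt cfg_after.txt
```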
00:05:28.875 05:22:25 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.875 05:22:25 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:28.875 05:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.875 + '[' 2 -ne 2 ']' 00:05:28.875 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.875 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:28.875 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:28.875 +++ basename /dev/fd/62 00:05:28.875 ++ mktemp /tmp/62.XXX 00:05:28.875 + tmp_file_1=/tmp/62.G8m 00:05:28.875 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.875 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.875 + tmp_file_2=/tmp/spdk_tgt_config.json.nQP 00:05:28.875 + ret=0 00:05:28.875 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.875 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.875 + diff -u /tmp/62.G8m /tmp/spdk_tgt_config.json.nQP 00:05:28.875 + echo 'INFO: JSON config files are the same' 00:05:28.875 INFO: JSON config files are the same 00:05:28.875 + rm /tmp/62.G8m /tmp/spdk_tgt_config.json.nQP 00:05:28.875 + exit 0 00:05:28.875 05:22:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:28.875 05:22:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:28.875 INFO: changing configuration and checking if this can be detected... 
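json_diff.sh receives `/dev/fd/62` as its first argument because the caller hands it a process substitution; any command that accepts a filename can read one. A tiny demo of the same mechanism (`count_lines` is a hypothetical helper):

```shell
count_lines() { wc -l < "$1"; }

# The function never knows it was given a pipe rather than a regular file.
n=$(count_lines <(printf 'subsystem_a\nsubsystem_b\n'))
echo "lines: $n"
```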
00:05:28.875 05:22:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.875 05:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.134 05:22:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:29.134 05:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.134 05:22:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.134 + '[' 2 -ne 2 ']' 00:05:29.134 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:29.134 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
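The temp files above (`/tmp/62.G8m`, `/tmp/spdk_tgt_config.json.nQP`) come from `mktemp` templates: the trailing run of `X`s is replaced with random characters. A quick check of that behavior:

```shell
tmp_file_1=$(mktemp /tmp/62.XXX)

# The three Xs become three random characters, e.g. /tmp/62.G8m in this run.
ok=no
case "$tmp_file_1" in /tmp/62.???) ok=yes;; esac
echo "$tmp_file_1"
rm -f "$tmp_file_1"
```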
00:05:29.135 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:29.135 +++ basename /dev/fd/62 00:05:29.135 ++ mktemp /tmp/62.XXX 00:05:29.135 + tmp_file_1=/tmp/62.sFZ 00:05:29.135 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.135 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.135 + tmp_file_2=/tmp/spdk_tgt_config.json.QrY 00:05:29.135 + ret=0 00:05:29.135 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.394 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.394 + diff -u /tmp/62.sFZ /tmp/spdk_tgt_config.json.QrY 00:05:29.394 + ret=1 00:05:29.394 + echo '=== Start of file: /tmp/62.sFZ ===' 00:05:29.394 + cat /tmp/62.sFZ 00:05:29.394 + echo '=== End of file: /tmp/62.sFZ ===' 00:05:29.394 + echo '' 00:05:29.394 + echo '=== Start of file: /tmp/spdk_tgt_config.json.QrY ===' 00:05:29.394 + cat /tmp/spdk_tgt_config.json.QrY 00:05:29.394 + echo '=== End of file: /tmp/spdk_tgt_config.json.QrY ===' 00:05:29.394 + echo '' 00:05:29.394 + rm /tmp/62.sFZ /tmp/spdk_tgt_config.json.QrY 00:05:29.394 + exit 1 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:29.394 INFO: configuration change detected. 
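After MallocBdevForConfigChangeCheck is deleted, the second diff pass is expected to fail; diff's nonzero exit status (captured as `ret=1` above) is the whole signal. A minimal sketch of that detection (file contents are illustrative):

```shell
printf 'malloc0\nMallocBdevForConfigChangeCheck\n' > cfg_saved.txt
printf 'malloc0\n' > cfg_now.txt    # the check bdev has been removed

# diff exits 1 when the files differ (2 only on real trouble).
ret=0
diff -u cfg_saved.txt cfg_now.txt > /dev/null || ret=$?
[ "$ret" -ne 0 ] && echo 'INFO: configuration change detected.'
rm -f cfg_saved.txt cfg_now.txt
```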
00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:29.394 05:22:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.394 05:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 3139271 ]] 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:29.394 05:22:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.394 05:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:29.394 05:22:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:29.653 05:22:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:29.653 05:22:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:29.653 05:22:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:29.653 05:22:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:29.653 05:22:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.653 05:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.653 05:22:26 json_config -- json_config/json_config.sh@330 -- # killprocess 3139271 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@954 -- # '[' -z 3139271 ']' 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@958 -- # kill -0 
3139271 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@959 -- # uname 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139271 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139271' 00:05:29.653 killing process with pid 3139271 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@973 -- # kill 3139271 00:05:29.653 05:22:26 json_config -- common/autotest_common.sh@978 -- # wait 3139271 00:05:32.941 05:22:29 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.941 05:22:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:32.941 05:22:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.941 05:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 05:22:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:32.941 05:22:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:32.941 INFO: Success 00:05:32.941 05:22:29 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:32.941 05:22:29 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:32.941 05:22:29 json_config -- nvmf/common.sh@121 -- # sync 00:05:32.941 05:22:29 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:32.941 05:22:29 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:32.941 05:22:29 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:05:32.941 05:22:29 json_config -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:32.941 05:22:29 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:05:32.941 00:05:32.941 real 0m28.410s 00:05:32.941 user 0m30.776s 00:05:32.941 sys 0m9.900s 00:05:32.941 05:22:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.941 05:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 END TEST json_config 00:05:32.942 ************************************ 00:05:32.942 05:22:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:32.942 05:22:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.942 05:22:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.942 05:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:32.942 ************************************ 00:05:32.942 START TEST json_config_extra_key 00:05:32.942 ************************************ 00:05:32.942 05:22:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:32.942 05:22:29 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.942 05:22:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.942 05:22:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.942 05:22:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.942 05:22:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.942 05:22:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.942 05:22:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.942 05:22:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.201 05:22:29 
json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:33.201 05:22:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.201 05:22:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 05:22:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 05:22:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 05:22:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s 
extglob 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.201 05:22:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.201 05:22:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.201 05:22:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.201 05:22:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.201 05:22:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:33.201 05:22:29 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.201 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.201 05:22:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:33.201 05:22:29 
json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:33.201 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:33.202 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:33.202 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.202 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:33.202 INFO: launching applications... 00:05:33.202 05:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3141007 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@24 -- # 
echo 'Waiting for target to run...' 00:05:33.202 Waiting for target to run... 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3141007 /var/tmp/spdk_tgt.sock 00:05:33.202 05:22:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3141007 ']' 00:05:33.202 05:22:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.202 05:22:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:33.202 05:22:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.202 05:22:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.202 05:22:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.202 05:22:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.202 [2024-11-27 05:22:29.675770] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:33.202 [2024-11-27 05:22:29.675875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141007 ] 00:05:33.770 [2024-11-27 05:22:30.201199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.770 [2024-11-27 05:22:30.314740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.338 05:22:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.338 05:22:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:34.338 05:22:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:34.338 00:05:34.338 05:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:34.338 INFO: shutting down applications... 00:05:34.338 05:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:34.338 05:22:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:34.338 05:22:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.338 05:22:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3141007 ]] 00:05:34.338 05:22:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3141007 00:05:34.338 05:22:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.338 05:22:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.597 05:22:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3141007 00:05:34.597 05:22:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.856 05:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.856 05:22:31 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.856 05:22:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3141007 00:05:34.856 05:22:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.424 05:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.424 05:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.424 05:22:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3141007 00:05:35.424 05:22:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.992 05:22:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.992 05:22:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.992 05:22:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3141007 00:05:35.992 05:22:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.560 05:22:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.560 05:22:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.560 05:22:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3141007 00:05:36.560 05:22:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.126 05:22:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.126 05:22:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.126 05:22:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3141007 00:05:37.126 05:22:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.126 05:22:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:37.126 05:22:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.126 05:22:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.126 SPDK target shutdown done 00:05:37.126 
05:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:37.126 Success 00:05:37.126 00:05:37.126 real 0m4.071s 00:05:37.126 user 0m3.612s 00:05:37.126 sys 0m0.802s 00:05:37.126 05:22:33 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.126 05:22:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.126 ************************************ 00:05:37.126 END TEST json_config_extra_key 00:05:37.126 ************************************ 00:05:37.126 05:22:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.126 05:22:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.126 05:22:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.126 05:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:37.126 ************************************ 00:05:37.126 START TEST alias_rpc 00:05:37.126 ************************************ 00:05:37.126 05:22:33 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.126 * Looking for test storage... 
00:05:37.126 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:37.126 05:22:33 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.126 05:22:33 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.126 05:22:33 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.127 05:22:33 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.127 05:22:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.127 05:22:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.385 05:22:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.385 --rc genhtml_branch_coverage=1 00:05:37.385 --rc genhtml_function_coverage=1 00:05:37.385 --rc genhtml_legend=1 00:05:37.385 --rc geninfo_all_blocks=1 00:05:37.385 --rc geninfo_unexecuted_blocks=1 00:05:37.385 00:05:37.385 ' 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.385 --rc genhtml_branch_coverage=1 00:05:37.385 --rc genhtml_function_coverage=1 00:05:37.385 --rc genhtml_legend=1 00:05:37.385 --rc geninfo_all_blocks=1 00:05:37.385 --rc geninfo_unexecuted_blocks=1 00:05:37.385 00:05:37.385 ' 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.385 --rc genhtml_branch_coverage=1 00:05:37.385 --rc genhtml_function_coverage=1 00:05:37.385 --rc genhtml_legend=1 00:05:37.385 --rc geninfo_all_blocks=1 00:05:37.385 --rc geninfo_unexecuted_blocks=1 00:05:37.385 00:05:37.385 ' 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.385 --rc genhtml_branch_coverage=1 00:05:37.385 --rc genhtml_function_coverage=1 00:05:37.385 --rc genhtml_legend=1 00:05:37.385 --rc geninfo_all_blocks=1 00:05:37.385 --rc geninfo_unexecuted_blocks=1 00:05:37.385 00:05:37.385 ' 00:05:37.385 05:22:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.385 05:22:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3141641 00:05:37.385 05:22:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3141641 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3141641 ']' 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.385 05:22:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.385 05:22:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.385 [2024-11-27 05:22:33.823568] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:37.385 [2024-11-27 05:22:33.823697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141641 ] 00:05:37.647 [2024-11-27 05:22:33.976756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.647 [2024-11-27 05:22:34.072821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.296 05:22:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.296 05:22:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.296 05:22:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:38.585 05:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3141641 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3141641 ']' 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3141641 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3141641 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3141641' 00:05:38.585 killing process with pid 3141641 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@973 -- # kill 3141641 00:05:38.585 05:22:35 alias_rpc -- common/autotest_common.sh@978 -- # wait 3141641 00:05:41.116 00:05:41.116 real 0m3.743s 00:05:41.116 user 0m3.733s 00:05:41.116 sys 0m0.612s 00:05:41.116 05:22:37 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.116 05:22:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.116 ************************************ 00:05:41.116 END TEST alias_rpc 00:05:41.116 ************************************ 00:05:41.116 05:22:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:41.116 05:22:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.116 05:22:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.116 05:22:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.116 05:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:41.116 ************************************ 00:05:41.116 START TEST spdkcli_tcp 00:05:41.116 ************************************ 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.116 * Looking for test storage... 00:05:41.116 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 
00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.116 05:22:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.116 --rc genhtml_branch_coverage=1 
00:05:41.116 --rc genhtml_function_coverage=1 00:05:41.116 --rc genhtml_legend=1 00:05:41.116 --rc geninfo_all_blocks=1 00:05:41.116 --rc geninfo_unexecuted_blocks=1 00:05:41.116 00:05:41.116 ' 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.116 --rc genhtml_branch_coverage=1 00:05:41.116 --rc genhtml_function_coverage=1 00:05:41.116 --rc genhtml_legend=1 00:05:41.116 --rc geninfo_all_blocks=1 00:05:41.116 --rc geninfo_unexecuted_blocks=1 00:05:41.116 00:05:41.116 ' 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.116 --rc genhtml_branch_coverage=1 00:05:41.116 --rc genhtml_function_coverage=1 00:05:41.116 --rc genhtml_legend=1 00:05:41.116 --rc geninfo_all_blocks=1 00:05:41.116 --rc geninfo_unexecuted_blocks=1 00:05:41.116 00:05:41.116 ' 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.116 --rc genhtml_branch_coverage=1 00:05:41.116 --rc genhtml_function_coverage=1 00:05:41.116 --rc genhtml_legend=1 00:05:41.116 --rc geninfo_all_blocks=1 00:05:41.116 --rc geninfo_unexecuted_blocks=1 00:05:41.116 00:05:41.116 ' 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.116 05:22:37 
spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3142484 00:05:41.116 05:22:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3142484 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3142484 ']' 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.116 05:22:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.117 05:22:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.117 [2024-11-27 05:22:37.654973] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:41.117 [2024-11-27 05:22:37.655069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3142484 ] 00:05:41.375 [2024-11-27 05:22:37.808302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.375 [2024-11-27 05:22:37.905548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.375 [2024-11-27 05:22:37.905558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.312 05:22:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.312 05:22:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:42.312 05:22:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3142669 00:05:42.312 05:22:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.312 05:22:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.312 [ 00:05:42.312 "bdev_malloc_delete", 00:05:42.312 "bdev_malloc_create", 00:05:42.312 "bdev_null_resize", 00:05:42.312 "bdev_null_delete", 00:05:42.312 "bdev_null_create", 00:05:42.312 "bdev_nvme_cuse_unregister", 00:05:42.312 "bdev_nvme_cuse_register", 00:05:42.312 "bdev_opal_new_user", 00:05:42.312 "bdev_opal_set_lock_state", 00:05:42.312 "bdev_opal_delete", 00:05:42.312 "bdev_opal_get_info", 00:05:42.312 "bdev_opal_create", 00:05:42.312 "bdev_nvme_opal_revert", 00:05:42.312 "bdev_nvme_opal_init", 00:05:42.312 "bdev_nvme_send_cmd", 00:05:42.312 "bdev_nvme_set_keys", 00:05:42.312 "bdev_nvme_get_path_iostat", 00:05:42.312 "bdev_nvme_get_mdns_discovery_info", 00:05:42.312 "bdev_nvme_stop_mdns_discovery", 00:05:42.312 "bdev_nvme_start_mdns_discovery", 00:05:42.312 "bdev_nvme_set_multipath_policy", 00:05:42.312 
"bdev_nvme_set_preferred_path", 00:05:42.312 "bdev_nvme_get_io_paths", 00:05:42.312 "bdev_nvme_remove_error_injection", 00:05:42.312 "bdev_nvme_add_error_injection", 00:05:42.312 "bdev_nvme_get_discovery_info", 00:05:42.312 "bdev_nvme_stop_discovery", 00:05:42.312 "bdev_nvme_start_discovery", 00:05:42.312 "bdev_nvme_get_controller_health_info", 00:05:42.312 "bdev_nvme_disable_controller", 00:05:42.312 "bdev_nvme_enable_controller", 00:05:42.312 "bdev_nvme_reset_controller", 00:05:42.312 "bdev_nvme_get_transport_statistics", 00:05:42.312 "bdev_nvme_apply_firmware", 00:05:42.312 "bdev_nvme_detach_controller", 00:05:42.312 "bdev_nvme_get_controllers", 00:05:42.312 "bdev_nvme_attach_controller", 00:05:42.312 "bdev_nvme_set_hotplug", 00:05:42.312 "bdev_nvme_set_options", 00:05:42.312 "bdev_passthru_delete", 00:05:42.312 "bdev_passthru_create", 00:05:42.312 "bdev_lvol_set_parent_bdev", 00:05:42.312 "bdev_lvol_set_parent", 00:05:42.312 "bdev_lvol_check_shallow_copy", 00:05:42.312 "bdev_lvol_start_shallow_copy", 00:05:42.312 "bdev_lvol_grow_lvstore", 00:05:42.312 "bdev_lvol_get_lvols", 00:05:42.312 "bdev_lvol_get_lvstores", 00:05:42.312 "bdev_lvol_delete", 00:05:42.312 "bdev_lvol_set_read_only", 00:05:42.312 "bdev_lvol_resize", 00:05:42.312 "bdev_lvol_decouple_parent", 00:05:42.312 "bdev_lvol_inflate", 00:05:42.312 "bdev_lvol_rename", 00:05:42.312 "bdev_lvol_clone_bdev", 00:05:42.312 "bdev_lvol_clone", 00:05:42.312 "bdev_lvol_snapshot", 00:05:42.312 "bdev_lvol_create", 00:05:42.312 "bdev_lvol_delete_lvstore", 00:05:42.312 "bdev_lvol_rename_lvstore", 00:05:42.312 "bdev_lvol_create_lvstore", 00:05:42.312 "bdev_raid_set_options", 00:05:42.312 "bdev_raid_remove_base_bdev", 00:05:42.312 "bdev_raid_add_base_bdev", 00:05:42.312 "bdev_raid_delete", 00:05:42.312 "bdev_raid_create", 00:05:42.312 "bdev_raid_get_bdevs", 00:05:42.312 "bdev_error_inject_error", 00:05:42.312 "bdev_error_delete", 00:05:42.312 "bdev_error_create", 00:05:42.312 "bdev_split_delete", 00:05:42.312 
"bdev_split_create", 00:05:42.312 "bdev_delay_delete", 00:05:42.312 "bdev_delay_create", 00:05:42.312 "bdev_delay_update_latency", 00:05:42.312 "bdev_zone_block_delete", 00:05:42.312 "bdev_zone_block_create", 00:05:42.312 "blobfs_create", 00:05:42.312 "blobfs_detect", 00:05:42.312 "blobfs_set_cache_size", 00:05:42.312 "bdev_aio_delete", 00:05:42.312 "bdev_aio_rescan", 00:05:42.312 "bdev_aio_create", 00:05:42.312 "bdev_ftl_set_property", 00:05:42.312 "bdev_ftl_get_properties", 00:05:42.312 "bdev_ftl_get_stats", 00:05:42.312 "bdev_ftl_unmap", 00:05:42.312 "bdev_ftl_unload", 00:05:42.312 "bdev_ftl_delete", 00:05:42.312 "bdev_ftl_load", 00:05:42.312 "bdev_ftl_create", 00:05:42.312 "bdev_virtio_attach_controller", 00:05:42.312 "bdev_virtio_scsi_get_devices", 00:05:42.312 "bdev_virtio_detach_controller", 00:05:42.312 "bdev_virtio_blk_set_hotplug", 00:05:42.312 "bdev_iscsi_delete", 00:05:42.312 "bdev_iscsi_create", 00:05:42.312 "bdev_iscsi_set_options", 00:05:42.312 "accel_error_inject_error", 00:05:42.312 "ioat_scan_accel_module", 00:05:42.312 "dsa_scan_accel_module", 00:05:42.312 "iaa_scan_accel_module", 00:05:42.312 "keyring_file_remove_key", 00:05:42.312 "keyring_file_add_key", 00:05:42.312 "keyring_linux_set_options", 00:05:42.312 "fsdev_aio_delete", 00:05:42.312 "fsdev_aio_create", 00:05:42.312 "iscsi_get_histogram", 00:05:42.312 "iscsi_enable_histogram", 00:05:42.312 "iscsi_set_options", 00:05:42.312 "iscsi_get_auth_groups", 00:05:42.312 "iscsi_auth_group_remove_secret", 00:05:42.312 "iscsi_auth_group_add_secret", 00:05:42.312 "iscsi_delete_auth_group", 00:05:42.312 "iscsi_create_auth_group", 00:05:42.312 "iscsi_set_discovery_auth", 00:05:42.312 "iscsi_get_options", 00:05:42.312 "iscsi_target_node_request_logout", 00:05:42.312 "iscsi_target_node_set_redirect", 00:05:42.312 "iscsi_target_node_set_auth", 00:05:42.312 "iscsi_target_node_add_lun", 00:05:42.312 "iscsi_get_stats", 00:05:42.312 "iscsi_get_connections", 00:05:42.312 "iscsi_portal_group_set_auth", 
00:05:42.312 "iscsi_start_portal_group", 00:05:42.312 "iscsi_delete_portal_group", 00:05:42.312 "iscsi_create_portal_group", 00:05:42.312 "iscsi_get_portal_groups", 00:05:42.312 "iscsi_delete_target_node", 00:05:42.312 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.312 "iscsi_target_node_add_pg_ig_maps", 00:05:42.312 "iscsi_create_target_node", 00:05:42.312 "iscsi_get_target_nodes", 00:05:42.312 "iscsi_delete_initiator_group", 00:05:42.312 "iscsi_initiator_group_remove_initiators", 00:05:42.312 "iscsi_initiator_group_add_initiators", 00:05:42.312 "iscsi_create_initiator_group", 00:05:42.312 "iscsi_get_initiator_groups", 00:05:42.312 "nvmf_set_crdt", 00:05:42.312 "nvmf_set_config", 00:05:42.312 "nvmf_set_max_subsystems", 00:05:42.312 "nvmf_stop_mdns_prr", 00:05:42.312 "nvmf_publish_mdns_prr", 00:05:42.312 "nvmf_subsystem_get_listeners", 00:05:42.312 "nvmf_subsystem_get_qpairs", 00:05:42.312 "nvmf_subsystem_get_controllers", 00:05:42.312 "nvmf_get_stats", 00:05:42.312 "nvmf_get_transports", 00:05:42.312 "nvmf_create_transport", 00:05:42.312 "nvmf_get_targets", 00:05:42.312 "nvmf_delete_target", 00:05:42.312 "nvmf_create_target", 00:05:42.312 "nvmf_subsystem_allow_any_host", 00:05:42.312 "nvmf_subsystem_set_keys", 00:05:42.312 "nvmf_subsystem_remove_host", 00:05:42.312 "nvmf_subsystem_add_host", 00:05:42.312 "nvmf_ns_remove_host", 00:05:42.312 "nvmf_ns_add_host", 00:05:42.312 "nvmf_subsystem_remove_ns", 00:05:42.312 "nvmf_subsystem_set_ns_ana_group", 00:05:42.312 "nvmf_subsystem_add_ns", 00:05:42.312 "nvmf_subsystem_listener_set_ana_state", 00:05:42.312 "nvmf_discovery_get_referrals", 00:05:42.312 "nvmf_discovery_remove_referral", 00:05:42.312 "nvmf_discovery_add_referral", 00:05:42.312 "nvmf_subsystem_remove_listener", 00:05:42.312 "nvmf_subsystem_add_listener", 00:05:42.312 "nvmf_delete_subsystem", 00:05:42.312 "nvmf_create_subsystem", 00:05:42.312 "nvmf_get_subsystems", 00:05:42.312 "env_dpdk_get_mem_stats", 00:05:42.312 "nbd_get_disks", 00:05:42.312 
"nbd_stop_disk", 00:05:42.312 "nbd_start_disk", 00:05:42.313 "ublk_recover_disk", 00:05:42.313 "ublk_get_disks", 00:05:42.313 "ublk_stop_disk", 00:05:42.313 "ublk_start_disk", 00:05:42.313 "ublk_destroy_target", 00:05:42.313 "ublk_create_target", 00:05:42.313 "virtio_blk_create_transport", 00:05:42.313 "virtio_blk_get_transports", 00:05:42.313 "vhost_controller_set_coalescing", 00:05:42.313 "vhost_get_controllers", 00:05:42.313 "vhost_delete_controller", 00:05:42.313 "vhost_create_blk_controller", 00:05:42.313 "vhost_scsi_controller_remove_target", 00:05:42.313 "vhost_scsi_controller_add_target", 00:05:42.313 "vhost_start_scsi_controller", 00:05:42.313 "vhost_create_scsi_controller", 00:05:42.313 "thread_set_cpumask", 00:05:42.313 "scheduler_set_options", 00:05:42.313 "framework_get_governor", 00:05:42.313 "framework_get_scheduler", 00:05:42.313 "framework_set_scheduler", 00:05:42.313 "framework_get_reactors", 00:05:42.313 "thread_get_io_channels", 00:05:42.313 "thread_get_pollers", 00:05:42.313 "thread_get_stats", 00:05:42.313 "framework_monitor_context_switch", 00:05:42.313 "spdk_kill_instance", 00:05:42.313 "log_enable_timestamps", 00:05:42.313 "log_get_flags", 00:05:42.313 "log_clear_flag", 00:05:42.313 "log_set_flag", 00:05:42.313 "log_get_level", 00:05:42.313 "log_set_level", 00:05:42.313 "log_get_print_level", 00:05:42.313 "log_set_print_level", 00:05:42.313 "framework_enable_cpumask_locks", 00:05:42.313 "framework_disable_cpumask_locks", 00:05:42.313 "framework_wait_init", 00:05:42.313 "framework_start_init", 00:05:42.313 "scsi_get_devices", 00:05:42.313 "bdev_get_histogram", 00:05:42.313 "bdev_enable_histogram", 00:05:42.313 "bdev_set_qos_limit", 00:05:42.313 "bdev_set_qd_sampling_period", 00:05:42.313 "bdev_get_bdevs", 00:05:42.313 "bdev_reset_iostat", 00:05:42.313 "bdev_get_iostat", 00:05:42.313 "bdev_examine", 00:05:42.313 "bdev_wait_for_examine", 00:05:42.313 "bdev_set_options", 00:05:42.313 "accel_get_stats", 00:05:42.313 "accel_set_options", 
00:05:42.313 "accel_set_driver", 00:05:42.313 "accel_crypto_key_destroy", 00:05:42.313 "accel_crypto_keys_get", 00:05:42.313 "accel_crypto_key_create", 00:05:42.313 "accel_assign_opc", 00:05:42.313 "accel_get_module_info", 00:05:42.313 "accel_get_opc_assignments", 00:05:42.313 "vmd_rescan", 00:05:42.313 "vmd_remove_device", 00:05:42.313 "vmd_enable", 00:05:42.313 "sock_get_default_impl", 00:05:42.313 "sock_set_default_impl", 00:05:42.313 "sock_impl_set_options", 00:05:42.313 "sock_impl_get_options", 00:05:42.313 "iobuf_get_stats", 00:05:42.313 "iobuf_set_options", 00:05:42.313 "keyring_get_keys", 00:05:42.313 "framework_get_pci_devices", 00:05:42.313 "framework_get_config", 00:05:42.313 "framework_get_subsystems", 00:05:42.313 "fsdev_set_opts", 00:05:42.313 "fsdev_get_opts", 00:05:42.313 "trace_get_info", 00:05:42.313 "trace_get_tpoint_group_mask", 00:05:42.313 "trace_disable_tpoint_group", 00:05:42.313 "trace_enable_tpoint_group", 00:05:42.313 "trace_clear_tpoint_mask", 00:05:42.313 "trace_set_tpoint_mask", 00:05:42.313 "notify_get_notifications", 00:05:42.313 "notify_get_types", 00:05:42.313 "spdk_get_version", 00:05:42.313 "rpc_get_methods" 00:05:42.313 ] 00:05:42.313 05:22:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.313 05:22:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.313 05:22:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.313 05:22:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.313 05:22:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3142484 00:05:42.313 05:22:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3142484 ']' 00:05:42.313 05:22:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3142484 00:05:42.313 05:22:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:42.572 05:22:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.572 05:22:38 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3142484 00:05:42.572 05:22:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.572 05:22:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.572 05:22:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3142484' 00:05:42.572 killing process with pid 3142484 00:05:42.572 05:22:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3142484 00:05:42.572 05:22:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3142484 00:05:45.108 00:05:45.108 real 0m3.866s 00:05:45.108 user 0m6.898s 00:05:45.108 sys 0m0.706s 00:05:45.108 05:22:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.108 05:22:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.108 ************************************ 00:05:45.108 END TEST spdkcli_tcp 00:05:45.108 ************************************ 00:05:45.108 05:22:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.108 05:22:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.108 05:22:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.108 05:22:41 -- common/autotest_common.sh@10 -- # set +x 00:05:45.108 ************************************ 00:05:45.108 START TEST dpdk_mem_utility 00:05:45.108 ************************************ 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.108 * Looking for test storage... 
00:05:45.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.108 05:22:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.108 --rc genhtml_branch_coverage=1 00:05:45.108 --rc genhtml_function_coverage=1 00:05:45.108 --rc genhtml_legend=1 00:05:45.108 --rc geninfo_all_blocks=1 00:05:45.108 --rc geninfo_unexecuted_blocks=1 00:05:45.108 00:05:45.108 ' 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.108 --rc genhtml_branch_coverage=1 00:05:45.108 --rc genhtml_function_coverage=1 00:05:45.108 --rc genhtml_legend=1 00:05:45.108 --rc geninfo_all_blocks=1 00:05:45.108 --rc 
geninfo_unexecuted_blocks=1 00:05:45.108 00:05:45.108 ' 00:05:45.108 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.108 --rc genhtml_branch_coverage=1 00:05:45.108 --rc genhtml_function_coverage=1 00:05:45.108 --rc genhtml_legend=1 00:05:45.108 --rc geninfo_all_blocks=1 00:05:45.108 --rc geninfo_unexecuted_blocks=1 00:05:45.108 00:05:45.108 ' 00:05:45.109 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.109 --rc genhtml_branch_coverage=1 00:05:45.109 --rc genhtml_function_coverage=1 00:05:45.109 --rc genhtml_legend=1 00:05:45.109 --rc geninfo_all_blocks=1 00:05:45.109 --rc geninfo_unexecuted_blocks=1 00:05:45.109 00:05:45.109 ' 00:05:45.109 05:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.109 05:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3143168 00:05:45.109 05:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3143168 00:05:45.109 05:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.109 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3143168 ']' 00:05:45.109 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.109 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.109 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.109 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.109 05:22:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.109 [2024-11-27 05:22:41.587292] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:45.109 [2024-11-27 05:22:41.587394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143168 ] 00:05:45.368 [2024-11-27 05:22:41.742812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.368 [2024-11-27 05:22:41.838266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.305 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.305 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:46.305 05:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.305 05:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.305 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.305 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.305 { 00:05:46.306 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.306 } 00:05:46.306 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.306 05:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.306 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:46.306 1 heaps totaling size 824.000000 MiB 00:05:46.306 size: 824.000000 MiB heap id: 0 00:05:46.306 end heaps---------- 00:05:46.306 9 mempools totaling size 603.782043 MiB 
00:05:46.306 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.306 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.306 size: 100.555481 MiB name: bdev_io_3143168 00:05:46.306 size: 50.003479 MiB name: msgpool_3143168 00:05:46.306 size: 36.509338 MiB name: fsdev_io_3143168 00:05:46.306 size: 21.763794 MiB name: PDU_Pool 00:05:46.306 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.306 size: 4.133484 MiB name: evtpool_3143168 00:05:46.306 size: 0.026123 MiB name: Session_Pool 00:05:46.306 end mempools------- 00:05:46.306 6 memzones totaling size 4.142822 MiB 00:05:46.306 size: 1.000366 MiB name: RG_ring_0_3143168 00:05:46.306 size: 1.000366 MiB name: RG_ring_1_3143168 00:05:46.306 size: 1.000366 MiB name: RG_ring_4_3143168 00:05:46.306 size: 1.000366 MiB name: RG_ring_5_3143168 00:05:46.306 size: 0.125366 MiB name: RG_ring_2_3143168 00:05:46.306 size: 0.015991 MiB name: RG_ring_3_3143168 00:05:46.306 end memzones------- 00:05:46.306 05:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.306 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:46.306 list of free elements. 
size: 16.847595 MiB 00:05:46.306 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:46.306 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:46.306 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:46.306 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:46.306 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:46.306 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:46.306 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:46.306 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:46.306 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:46.306 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:46.306 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:46.306 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:46.306 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:46.306 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:46.306 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:46.306 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:46.306 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:46.306 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:46.306 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:46.306 list of standard malloc elements. 
size: 199.221497 MiB 00:05:46.306 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:46.306 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:46.306 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:46.306 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:46.306 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:46.306 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:46.306 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:46.306 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:46.306 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:46.306 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:46.306 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:46.306 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:46.306 element at 
address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ff980 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:46.306 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:46.306 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:46.306 list of memzone associated elements. 
size: 607.930908 MiB 00:05:46.306 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:46.306 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.306 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:46.306 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.306 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:46.306 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3143168_0 00:05:46.306 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:46.306 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3143168_0 00:05:46.306 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:46.306 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3143168_0 00:05:46.306 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:46.306 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.306 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:46.306 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.306 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:46.306 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3143168_0 00:05:46.306 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:46.306 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3143168 00:05:46.306 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:46.306 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3143168 00:05:46.306 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:46.306 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.306 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:46.306 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.306 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:46.306 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.306 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:46.306 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.306 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:46.306 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3143168 00:05:46.306 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:46.306 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3143168 00:05:46.306 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:46.306 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3143168 00:05:46.306 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:46.306 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3143168 00:05:46.306 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:46.306 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3143168 00:05:46.306 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:46.306 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3143168 00:05:46.306 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:46.306 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.306 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:46.306 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.306 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:46.306 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.306 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:46.306 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3143168 00:05:46.306 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:46.306 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3143168 00:05:46.306 element at address: 0x2000192f5bc0 with size: 0.031799 
MiB 00:05:46.306 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.306 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:46.306 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.306 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:46.306 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3143168 00:05:46.306 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:46.307 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.307 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:46.307 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3143168 00:05:46.307 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:46.307 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3143168 00:05:46.307 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:46.307 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3143168 00:05:46.307 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:46.307 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.307 05:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.307 05:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3143168 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3143168 ']' 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3143168 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143168 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.307 05:22:42 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143168' 00:05:46.307 killing process with pid 3143168 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3143168 00:05:46.307 05:22:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3143168 00:05:48.841 00:05:48.841 real 0m3.631s 00:05:48.841 user 0m3.486s 00:05:48.841 sys 0m0.666s 00:05:48.841 05:22:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.841 05:22:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.841 ************************************ 00:05:48.841 END TEST dpdk_mem_utility 00:05:48.841 ************************************ 00:05:48.841 05:22:44 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:48.841 05:22:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.841 05:22:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.841 05:22:44 -- common/autotest_common.sh@10 -- # set +x 00:05:48.841 ************************************ 00:05:48.841 START TEST event 00:05:48.841 ************************************ 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:48.841 * Looking for test storage... 
00:05:48.841 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.841 05:22:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.841 05:22:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.841 05:22:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.841 05:22:45 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.841 05:22:45 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.841 05:22:45 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.841 05:22:45 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.841 05:22:45 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.841 05:22:45 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.841 05:22:45 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.841 05:22:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.841 05:22:45 event -- scripts/common.sh@344 -- # case "$op" in 00:05:48.841 05:22:45 event -- scripts/common.sh@345 -- # : 1 00:05:48.841 05:22:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.841 05:22:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.841 05:22:45 event -- scripts/common.sh@365 -- # decimal 1 00:05:48.841 05:22:45 event -- scripts/common.sh@353 -- # local d=1 00:05:48.841 05:22:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.841 05:22:45 event -- scripts/common.sh@355 -- # echo 1 00:05:48.841 05:22:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.841 05:22:45 event -- scripts/common.sh@366 -- # decimal 2 00:05:48.841 05:22:45 event -- scripts/common.sh@353 -- # local d=2 00:05:48.841 05:22:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.841 05:22:45 event -- scripts/common.sh@355 -- # echo 2 00:05:48.841 05:22:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.841 05:22:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.841 05:22:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.841 05:22:45 event -- scripts/common.sh@368 -- # return 0 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.841 --rc genhtml_branch_coverage=1 00:05:48.841 --rc genhtml_function_coverage=1 00:05:48.841 --rc genhtml_legend=1 00:05:48.841 --rc geninfo_all_blocks=1 00:05:48.841 --rc geninfo_unexecuted_blocks=1 00:05:48.841 00:05:48.841 ' 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.841 --rc genhtml_branch_coverage=1 00:05:48.841 --rc genhtml_function_coverage=1 00:05:48.841 --rc genhtml_legend=1 00:05:48.841 --rc geninfo_all_blocks=1 00:05:48.841 --rc geninfo_unexecuted_blocks=1 00:05:48.841 00:05:48.841 ' 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.841 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:48.841 --rc genhtml_branch_coverage=1 00:05:48.841 --rc genhtml_function_coverage=1 00:05:48.841 --rc genhtml_legend=1 00:05:48.841 --rc geninfo_all_blocks=1 00:05:48.841 --rc geninfo_unexecuted_blocks=1 00:05:48.841 00:05:48.841 ' 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.841 --rc genhtml_branch_coverage=1 00:05:48.841 --rc genhtml_function_coverage=1 00:05:48.841 --rc genhtml_legend=1 00:05:48.841 --rc geninfo_all_blocks=1 00:05:48.841 --rc geninfo_unexecuted_blocks=1 00:05:48.841 00:05:48.841 ' 00:05:48.841 05:22:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:48.841 05:22:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.841 05:22:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:48.841 05:22:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.841 05:22:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.841 ************************************ 00:05:48.841 START TEST event_perf 00:05:48.841 ************************************ 00:05:48.841 05:22:45 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.841 Running I/O for 1 seconds...[2024-11-27 05:22:45.275641] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:48.841 [2024-11-27 05:22:45.275718] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143965 ] 00:05:48.841 [2024-11-27 05:22:45.424893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.100 [2024-11-27 05:22:45.525313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.100 [2024-11-27 05:22:45.525386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.100 [2024-11-27 05:22:45.525448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.100 [2024-11-27 05:22:45.525459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.477 Running I/O for 1 seconds... 00:05:50.477 lcore 0: 216530 00:05:50.477 lcore 1: 216530 00:05:50.477 lcore 2: 216531 00:05:50.477 lcore 3: 216530 00:05:50.477 done. 
00:05:50.477 00:05:50.477 real 0m1.509s 00:05:50.477 user 0m4.333s 00:05:50.477 sys 0m0.172s 00:05:50.477 05:22:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.477 05:22:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.477 ************************************ 00:05:50.477 END TEST event_perf 00:05:50.477 ************************************ 00:05:50.477 05:22:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.477 05:22:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.477 05:22:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.477 05:22:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.477 ************************************ 00:05:50.477 START TEST event_reactor 00:05:50.477 ************************************ 00:05:50.477 05:22:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.477 [2024-11-27 05:22:46.872566] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:50.477 [2024-11-27 05:22:46.872652] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144252 ] 00:05:50.477 [2024-11-27 05:22:47.033682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.735 [2024-11-27 05:22:47.129913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.112 test_start 00:05:52.112 oneshot 00:05:52.112 tick 100 00:05:52.112 tick 100 00:05:52.112 tick 250 00:05:52.112 tick 100 00:05:52.112 tick 100 00:05:52.112 tick 250 00:05:52.112 tick 100 00:05:52.112 tick 500 00:05:52.112 tick 100 00:05:52.112 tick 100 00:05:52.112 tick 250 00:05:52.112 tick 100 00:05:52.112 tick 100 00:05:52.112 test_end 00:05:52.112 00:05:52.112 real 0m1.512s 00:05:52.112 user 0m1.332s 00:05:52.112 sys 0m0.173s 00:05:52.112 05:22:48 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.112 05:22:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.112 ************************************ 00:05:52.112 END TEST event_reactor 00:05:52.112 ************************************ 00:05:52.112 05:22:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.112 05:22:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:52.112 05:22:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.112 05:22:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.112 ************************************ 00:05:52.112 START TEST event_reactor_perf 00:05:52.112 ************************************ 00:05:52.112 05:22:48 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 
00:05:52.112 [2024-11-27 05:22:48.455824] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:52.112 [2024-11-27 05:22:48.455912] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144548 ] 00:05:52.112 [2024-11-27 05:22:48.605302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.370 [2024-11-27 05:22:48.704156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.307 test_start 00:05:53.307 test_end 00:05:53.307 Performance: 408712 events per second 00:05:53.566 00:05:53.566 real 0m1.492s 00:05:53.566 user 0m1.320s 00:05:53.566 sys 0m0.166s 00:05:53.566 05:22:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.566 05:22:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.566 ************************************ 00:05:53.566 END TEST event_reactor_perf 00:05:53.566 ************************************ 00:05:53.566 05:22:49 event -- event/event.sh@49 -- # uname -s 00:05:53.566 05:22:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.566 05:22:49 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.566 05:22:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.566 05:22:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.566 05:22:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.566 ************************************ 00:05:53.566 START TEST event_scheduler 00:05:53.566 ************************************ 00:05:53.566 05:22:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 
00:05:53.566 * Looking for test storage... 00:05:53.566 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:53.566 05:22:50 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.566 05:22:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.566 05:22:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.566 05:22:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.566 05:22:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.567 05:22:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:53.826 05:22:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.826 05:22:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.826 05:22:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.826 05:22:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.826 --rc genhtml_branch_coverage=1 00:05:53.826 --rc genhtml_function_coverage=1 00:05:53.826 --rc genhtml_legend=1 00:05:53.826 --rc geninfo_all_blocks=1 00:05:53.826 --rc geninfo_unexecuted_blocks=1 00:05:53.826 00:05:53.826 ' 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.826 --rc genhtml_branch_coverage=1 00:05:53.826 --rc genhtml_function_coverage=1 00:05:53.826 --rc 
genhtml_legend=1 00:05:53.826 --rc geninfo_all_blocks=1 00:05:53.826 --rc geninfo_unexecuted_blocks=1 00:05:53.826 00:05:53.826 ' 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.826 --rc genhtml_branch_coverage=1 00:05:53.826 --rc genhtml_function_coverage=1 00:05:53.826 --rc genhtml_legend=1 00:05:53.826 --rc geninfo_all_blocks=1 00:05:53.826 --rc geninfo_unexecuted_blocks=1 00:05:53.826 00:05:53.826 ' 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.826 --rc genhtml_branch_coverage=1 00:05:53.826 --rc genhtml_function_coverage=1 00:05:53.826 --rc genhtml_legend=1 00:05:53.826 --rc geninfo_all_blocks=1 00:05:53.826 --rc geninfo_unexecuted_blocks=1 00:05:53.826 00:05:53.826 ' 00:05:53.826 05:22:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.826 05:22:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3144866 00:05:53.826 05:22:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.826 05:22:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3144866 00:05:53.826 05:22:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3144866 ']' 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.826 05:22:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.826 [2024-11-27 05:22:50.223124] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:53.826 [2024-11-27 05:22:50.223239] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144866 ] 00:05:53.826 [2024-11-27 05:22:50.375384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.085 [2024-11-27 05:22:50.481701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.086 [2024-11-27 05:22:50.481767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.086 [2024-11-27 05:22:50.481825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.086 [2024-11-27 05:22:50.481842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.653 05:22:51 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.653 05:22:51 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:54.653 05:22:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.653 05:22:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.653 05:22:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.653 [2024-11-27 05:22:51.060258] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:54.653 [2024-11-27 05:22:51.060284] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.653 [2024-11-27 05:22:51.060303] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.653 [2024-11-27 05:22:51.060314] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.653 [2024-11-27 05:22:51.060326] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.653 05:22:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.653 05:22:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.653 05:22:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.653 05:22:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 [2024-11-27 05:22:51.337364] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:54.912 05:22:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.912 05:22:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.912 05:22:51 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 ************************************ 00:05:54.912 START TEST scheduler_create_thread 00:05:54.912 ************************************ 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 2 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 3 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 4 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 5 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 6 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 7 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 8 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 9 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 10 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.912 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.479 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.479 05:22:51 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:55.479 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.479 05:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.857 05:22:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.857 05:22:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.857 05:22:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.857 05:22:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.857 05:22:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.234 05:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.234 00:05:58.234 real 0m3.056s 00:05:58.234 user 0m0.024s 00:05:58.234 sys 0m0.007s 00:05:58.234 05:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.234 05:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.234 ************************************ 00:05:58.234 END TEST scheduler_create_thread 00:05:58.234 ************************************ 00:05:58.234 05:22:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.234 05:22:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3144866 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3144866 ']' 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3144866 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144866 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144866' 00:05:58.234 killing process with pid 3144866 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3144866 00:05:58.234 05:22:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3144866 00:05:58.234 [2024-11-27 05:22:54.816287] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:59.611 00:05:59.611 real 0m5.953s 00:05:59.611 user 0m12.246s 00:05:59.611 sys 0m0.606s 00:05:59.611 05:22:55 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.611 05:22:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.611 ************************************ 00:05:59.611 END TEST event_scheduler 00:05:59.611 ************************************ 00:05:59.611 05:22:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.611 05:22:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.611 05:22:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.611 05:22:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.611 05:22:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.611 ************************************ 00:05:59.611 START TEST app_repeat 00:05:59.611 ************************************ 00:05:59.611 05:22:56 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3145983 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3145983' 00:05:59.611 Process app_repeat pid: 3145983 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.611 spdk_app_start Round 0 00:05:59.611 05:22:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3145983 /var/tmp/spdk-nbd.sock 00:05:59.611 05:22:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3145983 ']' 00:05:59.611 05:22:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.611 05:22:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.611 05:22:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.611 05:22:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.611 05:22:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.611 [2024-11-27 05:22:56.087017] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:59.611 [2024-11-27 05:22:56.087108] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145983 ] 00:05:59.870 [2024-11-27 05:22:56.239313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.870 [2024-11-27 05:22:56.337533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.870 [2024-11-27 05:22:56.337543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.439 05:22:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.439 05:22:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.439 05:22:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.698 Malloc0 00:06:00.698 05:22:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.957 Malloc1 00:06:00.957 05:22:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.957 05:22:57 
event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.957 05:22:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.216 /dev/nbd0 00:06:01.216 05:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.216 05:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:01.216 1+0 records in 00:06:01.216 1+0 records out 00:06:01.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268334 s, 15.3 MB/s 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.216 05:22:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.216 05:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.216 05:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.216 05:22:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.475 /dev/nbd1 00:06:01.475 05:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.475 05:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
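The trace above shows the `waitfornbd` helper from `autotest_common.sh`: poll `/proc/partitions` up to 20 times until the nbd name appears, then probe the device with a single 4096-byte `dd` and check the copied size with `stat`. A minimal sketch of that poll-then-probe pattern, using a plain file in a temp directory as a stand-in for `/dev/nbd0` so it runs without SPDK (the log's `iflag=direct` is dropped, since it is not supported on regular files on all filesystems):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern seen in the trace: poll for a device,
# then probe it with a single-block dd and verify the copied size.
# A plain file stands in for /dev/nbd0; paths here are illustrative only.
set -euo pipefail

workdir=$(mktemp -d)
dev="$workdir/fake_nbd0"     # stand-in for /dev/nbd0
probe="$workdir/nbdtest"     # stand-in for spdk/test/event/nbdtest

# Simulate the device appearing asynchronously (atomic mv avoids a
# partial-write race in this sketch).
( sleep 0.2
  dd if=/dev/zero of="$dev.tmp" bs=4096 count=1 2>/dev/null
  mv "$dev.tmp" "$dev" ) &

# Poll up to 20 times, as the log's (( i <= 20 )) loop does; the real
# helper checks: grep -q -w nbd0 /proc/partitions
for ((i = 1; i <= 20; i++)); do
    [ -e "$dev" ] && break
    sleep 0.1
done

# Probe: copy one 4096-byte block and check its size, mirroring
# dd ... bs=4096 count=1 followed by stat -c %s in the trace.
dd if="$dev" of="$probe" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$probe")
rm -f "$probe"               # the helper removes the probe file too
echo "probe size: $size"
```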
00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.475 1+0 records in 00:06:01.475 1+0 records out 00:06:01.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227037 s, 18.0 MB/s 00:06:01.475 05:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.476 05:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.476 05:22:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.476 05:22:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.476 05:22:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.476 05:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.476 05:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.476 05:22:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.476 05:22:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.476 05:22:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.735 05:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.735 { 00:06:01.735 "nbd_device": "/dev/nbd0", 00:06:01.735 "bdev_name": "Malloc0" 00:06:01.735 }, 00:06:01.735 { 00:06:01.735 "nbd_device": "/dev/nbd1", 00:06:01.735 "bdev_name": "Malloc1" 00:06:01.735 } 00:06:01.735 ]' 00:06:01.735 05:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.735 { 00:06:01.736 "nbd_device": "/dev/nbd0", 00:06:01.736 "bdev_name": "Malloc0" 00:06:01.736 }, 00:06:01.736 { 00:06:01.736 "nbd_device": "/dev/nbd1", 
00:06:01.736 "bdev_name": "Malloc1" 00:06:01.736 } 00:06:01.736 ]' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.736 /dev/nbd1' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.736 /dev/nbd1' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.736 256+0 records in 00:06:01.736 256+0 records out 00:06:01.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115662 s, 90.7 MB/s 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 
bs=4096 count=256 oflag=direct 00:06:01.736 256+0 records in 00:06:01.736 256+0 records out 00:06:01.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211681 s, 49.5 MB/s 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.736 256+0 records in 00:06:01.736 256+0 records out 00:06:01.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171942 s, 61.0 MB/s 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.736 05:22:58 
event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.736 05:22:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.995 05:22:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.253 05:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.253 05:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.253 05:22:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.254 
05:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.254 05:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.254 05:22:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.254 05:22:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.254 05:22:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.254 05:22:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.254 05:22:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.254 05:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.513 05:22:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.513 05:22:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.774 05:22:59 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:06:04.147 [2024-11-27 05:23:00.436832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.147 [2024-11-27 05:23:00.535100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.147 [2024-11-27 05:23:00.535100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.147 [2024-11-27 05:23:00.709377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.147 [2024-11-27 05:23:00.709438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.051 05:23:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.051 05:23:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.051 spdk_app_start Round 1 00:06:06.051 05:23:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3145983 /var/tmp/spdk-nbd.sock 00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3145983 ']' 00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
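The `sleep 3` / `spdk_app_start Round 1` lines above mark the boundary between rounds of the `app_repeat` test: `event.sh` loops three times, each round starting the app, creating the malloc bdevs, running the nbd verify, then killing the instance with SIGTERM and sleeping before the next round. A sketch of that round structure, with hypothetical stand-in functions (`start_app`, `run_verify`, `stop_app` are not the real helper names; the real test drives SPDK through `rpc.py` on `/var/tmp/spdk-nbd.sock`):

```shell
#!/usr/bin/env bash
# Sketch of the event.sh app_repeat round loop visible in the trace.
# The three helpers below are illustrative stand-ins, not SPDK APIs.
start_app()  { echo "spdk_app_start Round $1"; }   # app + waitforlisten
run_verify() { echo "verify round $1 ok"; }        # nbd_rpc_data_verify
stop_app()   { echo "SIGTERM sent"; }              # spdk_kill_instance SIGTERM

for i in {0..2}; do          # matches event.sh@23: for i in {0..2}
    start_app "$i"
    run_verify "$i"
    stop_app                 # event.sh@34: spdk_kill_instance SIGTERM
    sleep 0.1                # the log sleeps 3 between rounds; shortened here
done
```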
00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.051 05:23:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.051 05:23:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.309 Malloc0 00:06:06.309 05:23:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.568 Malloc1 00:06:06.568 05:23:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.568 05:23:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.568 /dev/nbd0 00:06:06.568 05:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.568 05:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.568 1+0 records in 00:06:06.568 1+0 records out 00:06:06.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160479 s, 25.5 MB/s 00:06:06.568 05:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.827 05:23:03 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.827 05:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.827 05:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.827 05:23:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.827 /dev/nbd1 00:06:06.827 05:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.827 05:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.827 1+0 records in 00:06:06.827 1+0 records out 00:06:06.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251576 s, 16.3 MB/s 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.827 05:23:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.827 05:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.827 05:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.085 05:23:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.085 05:23:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.085 05:23:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.086 { 00:06:07.086 "nbd_device": "/dev/nbd0", 00:06:07.086 "bdev_name": "Malloc0" 00:06:07.086 }, 00:06:07.086 { 00:06:07.086 "nbd_device": "/dev/nbd1", 00:06:07.086 "bdev_name": "Malloc1" 00:06:07.086 } 00:06:07.086 ]' 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.086 { 00:06:07.086 "nbd_device": "/dev/nbd0", 00:06:07.086 "bdev_name": "Malloc0" 00:06:07.086 }, 00:06:07.086 { 00:06:07.086 "nbd_device": "/dev/nbd1", 00:06:07.086 "bdev_name": "Malloc1" 00:06:07.086 } 00:06:07.086 ]' 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.086 /dev/nbd1' 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.086 /dev/nbd1' 00:06:07.086 05:23:03 
event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.086 256+0 records in 00:06:07.086 256+0 records out 00:06:07.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103782 s, 101 MB/s 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.086 05:23:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.345 256+0 records in 00:06:07.345 256+0 records out 00:06:07.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211509 s, 49.6 MB/s 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.345 256+0 records in 00:06:07.345 256+0 records out 00:06:07.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017545 s, 59.8 MB/s 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 
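The `dd`/`cmp` records above are the core of `nbd_dd_data_verify`: write 256 × 4 KiB of `/dev/urandom` data to a temp file, copy it onto each nbd device, then `cmp -b -n 1M` each device against the source and delete the temp file. A runnable sketch of the same write-then-verify flow, with plain files standing in for `/dev/nbd0` and `/dev/nbd1` (the log's `oflag=direct` is dropped so the sketch works on regular files):

```shell
#!/usr/bin/env bash
# Sketch of nbd_dd_data_verify from the trace: write random data, copy it
# to each "device", then byte-compare the first 1M of each copy.
set -euo pipefail
workdir=$(mktemp -d)
rand="$workdir/nbdrandtest"                  # stand-in for .../event/nbdrandtest
nbd_list=("$workdir/nbd0" "$workdir/nbd1")   # stand-ins for /dev/nbd0, /dev/nbd1

# Write phase: same sizes as the log (bs=4096 count=256 -> 1048576 bytes).
dd if=/dev/urandom of="$rand" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$rand" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: cmp -b -n 1M <source> <device>, exactly as in the trace;
# cmp exits nonzero on the first differing byte, failing the script.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$rand" "$dev"
done
echo "verify ok"
rm "$rand"                                   # nbd_common.sh@85: rm nbdrandtest
```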
00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.345 05:23:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.603 05:23:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.604 05:23:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
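The `nbd_get_count` records seen in the trace (both here and after the earlier round) count attached devices by piping the JSON from `rpc.py nbd_get_disks` through `jq` and `grep -c`. A sketch of that pipeline with the JSON inlined instead of fetched over the RPC socket (requires `jq`):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count logic: extract each .nbd_device from the
# nbd_get_disks JSON with jq, then count /dev/nbd matches with grep -c.
# The JSON below is inlined sample data, not live RPC output.
set -euo pipefail
nbd_disks_json='[ {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
                  {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"} ]'
count=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
echo "count=$count"   # 2 while disks are attached; 0 after nbd_stop_disk
```

After `nbd_stop_disk`, the RPC returns `[]`, the `jq` output is empty, and `grep -c` yields 0, which is what the `count=0` records below reflect.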
00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.604 05:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.862 05:23:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.863 05:23:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.863 05:23:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.863 05:23:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.863 05:23:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.429 05:23:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.366 [2024-11-27 05:23:05.914065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.625 [2024-11-27 05:23:06.010772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.625 [2024-11-27 05:23:06.010779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.625 [2024-11-27 05:23:06.182410] notify.c: 
45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.625 [2024-11-27 05:23:06.182468] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.528 05:23:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.528 05:23:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.528 spdk_app_start Round 2 00:06:11.528 05:23:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3145983 /var/tmp/spdk-nbd.sock 00:06:11.528 05:23:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3145983 ']' 00:06:11.528 05:23:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.528 05:23:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.528 05:23:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:11.528 05:23:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.529 05:23:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.529 05:23:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.529 05:23:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.529 05:23:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.787 Malloc0 00:06:11.787 05:23:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.071 Malloc1 00:06:12.071 05:23:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.071 05:23:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.072 05:23:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:06:12.072 05:23:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.072 05:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.072 05:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.072 05:23:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.330 /dev/nbd0 00:06:12.330 05:23:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.331 05:23:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.331 1+0 records in 00:06:12.331 1+0 records out 00:06:12.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242057 s, 16.9 MB/s 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.331 05:23:08 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.331 05:23:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.331 05:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.331 05:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.331 05:23:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.590 /dev/nbd1 00:06:12.590 05:23:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.590 05:23:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.590 1+0 records in 00:06:12.590 1+0 records out 00:06:12.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221589 s, 18.5 MB/s 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.590 05:23:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.590 05:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.590 05:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.590 05:23:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.590 05:23:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.590 05:23:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.590 05:23:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.590 { 00:06:12.590 "nbd_device": "/dev/nbd0", 00:06:12.590 "bdev_name": "Malloc0" 00:06:12.590 }, 00:06:12.590 { 00:06:12.590 "nbd_device": "/dev/nbd1", 00:06:12.590 "bdev_name": "Malloc1" 00:06:12.590 } 00:06:12.590 ]' 00:06:12.590 05:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.590 { 00:06:12.590 "nbd_device": "/dev/nbd0", 00:06:12.590 "bdev_name": "Malloc0" 00:06:12.590 }, 00:06:12.590 { 00:06:12.590 "nbd_device": "/dev/nbd1", 00:06:12.590 "bdev_name": "Malloc1" 00:06:12.590 } 00:06:12.590 ]' 00:06:12.590 05:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.848 /dev/nbd1' 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.848 /dev/nbd1' 00:06:12.848 05:23:09 
event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.848 05:23:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.849 256+0 records in 00:06:12.849 256+0 records out 00:06:12.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104707 s, 100 MB/s 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.849 256+0 records in 00:06:12.849 256+0 records out 00:06:12.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215257 s, 48.7 MB/s 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.849 256+0 records in 00:06:12.849 256+0 records out 00:06:12.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169422 s, 61.9 MB/s 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.849 05:23:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.107 05:23:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.366 05:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.624 05:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.625 05:23:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.625 05:23:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.883 05:23:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.261 [2024-11-27 05:23:11.525386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.261 [2024-11-27 05:23:11.624328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.261 [2024-11-27 05:23:11.624328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.261 [2024-11-27 05:23:11.797019] notify.c: 
45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.261 [2024-11-27 05:23:11.797076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.166 05:23:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3145983 /var/tmp/spdk-nbd.sock 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3145983 ']' 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.166 05:23:13 event.app_repeat -- event/event.sh@39 -- # killprocess 3145983 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3145983 ']' 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3145983 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145983 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.166 05:23:13 event.app_repeat -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145983' 00:06:17.166 killing process with pid 3145983 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3145983 00:06:17.166 05:23:13 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3145983 00:06:18.103 spdk_app_start is called in Round 0. 00:06:18.103 Shutdown signal received, stop current app iteration 00:06:18.103 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:18.103 spdk_app_start is called in Round 1. 00:06:18.103 Shutdown signal received, stop current app iteration 00:06:18.103 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:18.103 spdk_app_start is called in Round 2. 00:06:18.103 Shutdown signal received, stop current app iteration 00:06:18.103 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:18.103 spdk_app_start is called in Round 3. 
00:06:18.103 Shutdown signal received, stop current app iteration 00:06:18.103 05:23:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.103 05:23:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:18.103 00:06:18.103 real 0m18.566s 00:06:18.103 user 0m38.713s 00:06:18.103 sys 0m3.194s 00:06:18.103 05:23:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.103 05:23:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.103 ************************************ 00:06:18.103 END TEST app_repeat 00:06:18.103 ************************************ 00:06:18.103 05:23:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.103 05:23:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.103 05:23:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.103 05:23:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.103 05:23:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.103 ************************************ 00:06:18.103 START TEST cpu_locks 00:06:18.103 ************************************ 00:06:18.104 05:23:14 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.363 * Looking for test storage... 
00:06:18.363 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.363 05:23:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.363 --rc genhtml_branch_coverage=1 00:06:18.363 --rc genhtml_function_coverage=1 00:06:18.363 --rc genhtml_legend=1 00:06:18.363 --rc geninfo_all_blocks=1 00:06:18.363 --rc geninfo_unexecuted_blocks=1 00:06:18.363 00:06:18.363 ' 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.363 --rc genhtml_branch_coverage=1 00:06:18.363 --rc genhtml_function_coverage=1 00:06:18.363 --rc genhtml_legend=1 00:06:18.363 --rc geninfo_all_blocks=1 00:06:18.363 --rc geninfo_unexecuted_blocks=1 
00:06:18.363 00:06:18.363 ' 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.363 --rc genhtml_branch_coverage=1 00:06:18.363 --rc genhtml_function_coverage=1 00:06:18.363 --rc genhtml_legend=1 00:06:18.363 --rc geninfo_all_blocks=1 00:06:18.363 --rc geninfo_unexecuted_blocks=1 00:06:18.363 00:06:18.363 ' 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.363 --rc genhtml_branch_coverage=1 00:06:18.363 --rc genhtml_function_coverage=1 00:06:18.363 --rc genhtml_legend=1 00:06:18.363 --rc geninfo_all_blocks=1 00:06:18.363 --rc geninfo_unexecuted_blocks=1 00:06:18.363 00:06:18.363 ' 00:06:18.363 05:23:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.363 05:23:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.363 05:23:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.363 05:23:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.363 05:23:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.363 ************************************ 00:06:18.363 START TEST default_locks 00:06:18.363 ************************************ 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3149420 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3149420 00:06:18.363 05:23:14 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3149420 ']' 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.363 05:23:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.622 [2024-11-27 05:23:15.010421] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:18.622 [2024-11-27 05:23:15.010517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149420 ] 00:06:18.622 [2024-11-27 05:23:15.159767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.882 [2024-11-27 05:23:15.256219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.450 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.450 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:19.450 05:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3149420 00:06:19.450 05:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3149420 00:06:19.450 05:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.018 lslocks: write error 00:06:20.018 05:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3149420 00:06:20.018 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3149420 ']' 00:06:20.018 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3149420 00:06:20.018 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.276 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.276 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3149420 00:06:20.276 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.276 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.276 05:23:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3149420' 00:06:20.276 killing process with pid 3149420 00:06:20.276 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3149420 00:06:20.276 05:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3149420 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3149420 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3149420 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3149420 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3149420 ']' 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.813 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3149420) - No such process 00:06:22.813 ERROR: process (pid: 3149420) is no longer running 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.813 00:06:22.813 real 0m3.955s 00:06:22.813 user 0m3.902s 00:06:22.813 sys 0m0.840s 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.813 05:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.813 ************************************ 00:06:22.813 END TEST default_locks 00:06:22.813 ************************************ 00:06:22.813 05:23:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:22.813 05:23:18 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.813 05:23:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.813 05:23:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.813 ************************************ 00:06:22.813 START TEST default_locks_via_rpc 00:06:22.813 ************************************ 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3150243 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3150243 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3150243 ']' 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.813 05:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.813 [2024-11-27 05:23:19.051147] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:22.813 [2024-11-27 05:23:19.051277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150243 ] 00:06:22.813 [2024-11-27 05:23:19.206539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.813 [2024-11-27 05:23:19.300847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.752 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.753 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.753 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.753 05:23:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.753 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3150243 00:06:23.753 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3150243 00:06:23.753 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3150243 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3150243 ']' 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3150243 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3150243 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3150243' 00:06:24.320 killing process with pid 3150243 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3150243 00:06:24.320 05:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3150243 00:06:26.860 00:06:26.860 real 0m3.984s 00:06:26.860 user 0m3.957s 00:06:26.860 sys 0m0.807s 00:06:26.860 05:23:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.860 05:23:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.860 ************************************ 00:06:26.860 END TEST default_locks_via_rpc 00:06:26.860 ************************************ 00:06:26.860 05:23:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.860 05:23:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.860 05:23:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.860 05:23:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.860 ************************************ 00:06:26.860 START TEST non_locking_app_on_locked_coremask 00:06:26.860 ************************************ 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3150867 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3150867 /var/tmp/spdk.sock 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3150867 ']' 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.860 05:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.861 [2024-11-27 05:23:23.108437] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:26.861 [2024-11-27 05:23:23.108548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150867 ] 00:06:26.861 [2024-11-27 05:23:23.263247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.861 [2024-11-27 05:23:23.363570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3151089 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3151089 /var/tmp/spdk2.sock 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3151089 ']' 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.801 05:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.801 [2024-11-27 05:23:24.167194] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:27.801 [2024-11-27 05:23:24.167310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151089 ] 00:06:27.801 [2024-11-27 05:23:24.381398] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.801 [2024-11-27 05:23:24.381460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.061 [2024-11-27 05:23:24.574644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.592 05:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.592 05:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.592 05:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3150867 00:06:30.592 05:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3150867 00:06:30.592 05:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.851 lslocks: write error 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3150867 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3150867 ']' 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3150867 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3150867 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3150867' 00:06:30.851 killing process with pid 3150867 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3150867 00:06:30.851 05:23:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3150867 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3151089 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3151089 ']' 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3151089 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151089 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151089' 00:06:36.123 killing process with pid 3151089 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3151089 00:06:36.123 05:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3151089 00:06:37.504 00:06:37.504 real 0m10.927s 00:06:37.504 user 0m11.068s 00:06:37.504 sys 0m1.390s 00:06:37.504 05:23:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.504 05:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.504 ************************************ 00:06:37.504 END TEST non_locking_app_on_locked_coremask 00:06:37.504 ************************************ 00:06:37.504 05:23:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:37.504 05:23:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.504 05:23:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.504 05:23:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.504 ************************************ 00:06:37.504 START TEST locking_app_on_unlocked_coremask 00:06:37.504 ************************************ 00:06:37.504 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:37.504 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3152861 00:06:37.505 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3152861 /var/tmp/spdk.sock 00:06:37.505 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:37.505 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3152861 ']' 00:06:37.505 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.505 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.505 05:23:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.505 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.505 05:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.765 [2024-11-27 05:23:34.120586] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:37.765 [2024-11-27 05:23:34.120688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152861 ] 00:06:37.765 [2024-11-27 05:23:34.274278] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.765 [2024-11-27 05:23:34.274323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.027 [2024-11-27 05:23:34.370568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.655 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3153000 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3153000 /var/tmp/spdk2.sock 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3153000 ']' 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.656 05:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.656 [2024-11-27 05:23:35.184207] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:38.656 [2024-11-27 05:23:35.184304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153000 ] 00:06:38.970 [2024-11-27 05:23:35.406226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.229 [2024-11-27 05:23:35.600171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.137 05:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.137 05:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.137 05:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3153000 00:06:41.137 05:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3153000 00:06:41.137 05:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.075 lslocks: write error 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3152861 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3152861 ']' 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3152861 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3152861 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3152861' 00:06:42.075 killing process with pid 3152861 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3152861 00:06:42.075 05:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3152861 00:06:47.354 05:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3153000 00:06:47.354 05:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3153000 ']' 00:06:47.354 05:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3153000 00:06:47.354 05:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.354 05:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.354 05:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153000 00:06:47.354 05:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.354 05:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.354 05:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153000' 00:06:47.354 killing process with pid 3153000 00:06:47.354 05:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3153000 00:06:47.354 05:23:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3153000 00:06:48.734 00:06:48.734 real 0m11.244s 00:06:48.734 user 0m11.454s 00:06:48.734 sys 0m1.602s 00:06:48.734 05:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.734 05:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.734 ************************************ 00:06:48.734 END TEST locking_app_on_unlocked_coremask 00:06:48.734 ************************************ 00:06:48.734 05:23:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:48.734 05:23:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.734 05:23:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.734 05:23:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.993 ************************************ 00:06:48.993 START TEST locking_app_on_locked_coremask 00:06:48.993 ************************************ 00:06:48.993 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:48.993 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3154901 00:06:48.993 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3154901 /var/tmp/spdk.sock 00:06:48.993 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3154901 ']' 00:06:48.994 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.994 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.994 05:23:45 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.994 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.994 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.994 05:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.994 [2024-11-27 05:23:45.448861] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:48.994 [2024-11-27 05:23:45.448972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154901 ] 00:06:49.253 [2024-11-27 05:23:45.600673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.253 [2024-11-27 05:23:45.699824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3155035 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3155035 
/var/tmp/spdk2.sock 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3155035 /var/tmp/spdk2.sock 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3155035 /var/tmp/spdk2.sock 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3155035 ']' 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.192 05:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.192 [2024-11-27 05:23:46.501463] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:50.192 [2024-11-27 05:23:46.501577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155035 ] 00:06:50.192 [2024-11-27 05:23:46.715319] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3154901 has claimed it. 00:06:50.192 [2024-11-27 05:23:46.715375] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.760 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3155035) - No such process 00:06:50.760 ERROR: process (pid: 3155035) is no longer running 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3154901 00:06:50.760 05:23:47 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3154901 00:06:50.760 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.019 lslocks: write error 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3154901 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3154901 ']' 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3154901 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154901 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154901' 00:06:51.019 killing process with pid 3154901 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3154901 00:06:51.019 05:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3154901 00:06:53.553 00:06:53.554 real 0m4.428s 00:06:53.554 user 0m4.522s 00:06:53.554 sys 0m0.922s 00:06:53.554 05:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.554 05:23:49 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.554 ************************************ 00:06:53.554 END TEST locking_app_on_locked_coremask 00:06:53.554 ************************************ 00:06:53.554 05:23:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.554 05:23:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.554 05:23:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.554 05:23:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.554 ************************************ 00:06:53.554 START TEST locking_overlapped_coremask 00:06:53.554 ************************************ 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3155730 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3155730 /var/tmp/spdk.sock 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3155730 ']' 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.554 05:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.554 [2024-11-27 05:23:49.952233] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:53.554 [2024-11-27 05:23:49.952344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155730 ] 00:06:53.554 [2024-11-27 05:23:50.110371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.813 [2024-11-27 05:23:50.213975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.813 [2024-11-27 05:23:50.214046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.813 [2024-11-27 05:23:50.214048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3155822 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3155822 /var/tmp/spdk2.sock 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3155822 /var/tmp/spdk2.sock 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:54.749 05:23:50 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3155822 /var/tmp/spdk2.sock 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3155822 ']' 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.749 05:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.749 [2024-11-27 05:23:51.081537] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:54.749 [2024-11-27 05:23:51.081644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155822 ] 00:06:54.749 [2024-11-27 05:23:51.305545] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3155730 has claimed it. 00:06:54.749 [2024-11-27 05:23:51.305606] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.316 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3155822) - No such process 00:06:55.316 ERROR: process (pid: 3155822) is no longer running 00:06:55.316 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.316 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:55.316 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:55.316 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.316 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.316 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.316 05:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3155730 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3155730 ']' 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3155730 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3155730 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3155730' 00:06:55.317 killing process with pid 3155730 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3155730 00:06:55.317 05:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3155730 00:06:57.850 00:06:57.850 real 0m4.169s 00:06:57.850 user 0m11.318s 00:06:57.850 sys 0m0.763s 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.850 
************************************ 00:06:57.850 END TEST locking_overlapped_coremask 00:06:57.850 ************************************ 00:06:57.850 05:23:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:57.850 05:23:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.850 05:23:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.850 05:23:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.850 ************************************ 00:06:57.850 START TEST locking_overlapped_coremask_via_rpc 00:06:57.850 ************************************ 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3156389 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3156389 /var/tmp/spdk.sock 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3156389 ']' 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.850 05:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:57.850 [2024-11-27 05:23:54.199838] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:57.850 [2024-11-27 05:23:54.199932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156389 ] 00:06:57.850 [2024-11-27 05:23:54.352352] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.850 [2024-11-27 05:23:54.352395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.109 [2024-11-27 05:23:54.453947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.109 [2024-11-27 05:23:54.454013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.109 [2024-11-27 05:23:54.454024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3156587 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3156587 /var/tmp/spdk2.sock 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3156587 ']' 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.678 05:23:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.938 [2024-11-27 05:23:55.293003] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:58.938 [2024-11-27 05:23:55.293098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156587 ] 00:06:58.938 [2024-11-27 05:23:55.510288] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.938 [2024-11-27 05:23:55.510341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.197 [2024-11-27 05:23:55.725242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.197 [2024-11-27 05:23:55.725330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.197 [2024-11-27 05:23:55.725364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.733 05:23:57 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 [2024-11-27 05:23:57.818743] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3156389 has claimed it. 00:07:01.733 request: 00:07:01.733 { 00:07:01.733 "method": "framework_enable_cpumask_locks", 00:07:01.733 "req_id": 1 00:07:01.733 } 00:07:01.733 Got JSON-RPC error response 00:07:01.733 response: 00:07:01.733 { 00:07:01.733 "code": -32603, 00:07:01.733 "message": "Failed to claim CPU core: 2" 00:07:01.733 } 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3156389 /var/tmp/spdk.sock 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3156389 ']' 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.733 05:23:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3156587 /var/tmp/spdk2.sock 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3156587 ']' 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.733 00:07:01.733 real 0m4.114s 00:07:01.733 user 0m1.067s 00:07:01.733 sys 0m0.238s 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.733 05:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 ************************************ 00:07:01.733 END TEST locking_overlapped_coremask_via_rpc 00:07:01.733 ************************************ 00:07:01.733 05:23:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:01.733 05:23:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3156389 ]] 00:07:01.733 05:23:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3156389 00:07:01.733 05:23:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3156389 ']' 00:07:01.733 05:23:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3156389 00:07:01.733 05:23:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:01.733 05:23:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.733 05:23:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3156389 00:07:01.992 05:23:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.992 05:23:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.992 05:23:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3156389' 00:07:01.992 killing process with pid 3156389 00:07:01.992 05:23:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3156389 00:07:01.992 05:23:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3156389 00:07:04.530 05:24:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3156587 ]] 00:07:04.530 05:24:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3156587 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3156587 ']' 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3156587 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3156587 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3156587' 00:07:04.530 killing process with pid 3156587 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3156587 00:07:04.530 05:24:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3156587 00:07:07.063 05:24:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.063 05:24:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.063 05:24:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3156389 ]] 00:07:07.063 05:24:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3156389 00:07:07.063 05:24:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3156389 ']' 00:07:07.063 05:24:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3156389 00:07:07.063 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3156389) - No such process 00:07:07.063 05:24:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3156389 is not found' 00:07:07.063 Process with pid 3156389 is not found 00:07:07.063 05:24:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3156587 ]] 00:07:07.063 05:24:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3156587 00:07:07.063 05:24:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3156587 ']' 00:07:07.063 05:24:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3156587 00:07:07.063 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3156587) - No such process 00:07:07.063 05:24:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3156587 is not found' 00:07:07.063 Process with pid 3156587 is not found 00:07:07.063 05:24:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.063 00:07:07.063 real 0m48.356s 00:07:07.063 user 1m22.009s 00:07:07.063 sys 0m8.025s 00:07:07.063 05:24:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.063 05:24:03 
event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.063 ************************************ 00:07:07.063 END TEST cpu_locks 00:07:07.063 ************************************ 00:07:07.063 00:07:07.063 real 1m18.059s 00:07:07.063 user 2m20.202s 00:07:07.063 sys 0m12.807s 00:07:07.063 05:24:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.063 05:24:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.063 ************************************ 00:07:07.063 END TEST event 00:07:07.063 ************************************ 00:07:07.063 05:24:03 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:07.063 05:24:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.064 05:24:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.064 05:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:07.064 ************************************ 00:07:07.064 START TEST thread 00:07:07.064 ************************************ 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:07.064 * Looking for test storage... 
00:07:07.064 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.064 05:24:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.064 05:24:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.064 05:24:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.064 05:24:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.064 05:24:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.064 05:24:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.064 05:24:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.064 05:24:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.064 05:24:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.064 05:24:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.064 05:24:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.064 05:24:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:07.064 05:24:03 thread -- scripts/common.sh@345 -- # : 1 00:07:07.064 05:24:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.064 05:24:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.064 05:24:03 thread -- scripts/common.sh@365 -- # decimal 1 00:07:07.064 05:24:03 thread -- scripts/common.sh@353 -- # local d=1 00:07:07.064 05:24:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.064 05:24:03 thread -- scripts/common.sh@355 -- # echo 1 00:07:07.064 05:24:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.064 05:24:03 thread -- scripts/common.sh@366 -- # decimal 2 00:07:07.064 05:24:03 thread -- scripts/common.sh@353 -- # local d=2 00:07:07.064 05:24:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.064 05:24:03 thread -- scripts/common.sh@355 -- # echo 2 00:07:07.064 05:24:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.064 05:24:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.064 05:24:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.064 05:24:03 thread -- scripts/common.sh@368 -- # return 0 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.064 --rc genhtml_branch_coverage=1 00:07:07.064 --rc genhtml_function_coverage=1 00:07:07.064 --rc genhtml_legend=1 00:07:07.064 --rc geninfo_all_blocks=1 00:07:07.064 --rc geninfo_unexecuted_blocks=1 00:07:07.064 00:07:07.064 ' 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.064 --rc genhtml_branch_coverage=1 00:07:07.064 --rc genhtml_function_coverage=1 00:07:07.064 --rc genhtml_legend=1 00:07:07.064 --rc geninfo_all_blocks=1 00:07:07.064 --rc geninfo_unexecuted_blocks=1 00:07:07.064 00:07:07.064 ' 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.064 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.064 --rc genhtml_branch_coverage=1 00:07:07.064 --rc genhtml_function_coverage=1 00:07:07.064 --rc genhtml_legend=1 00:07:07.064 --rc geninfo_all_blocks=1 00:07:07.064 --rc geninfo_unexecuted_blocks=1 00:07:07.064 00:07:07.064 ' 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.064 --rc genhtml_branch_coverage=1 00:07:07.064 --rc genhtml_function_coverage=1 00:07:07.064 --rc genhtml_legend=1 00:07:07.064 --rc geninfo_all_blocks=1 00:07:07.064 --rc geninfo_unexecuted_blocks=1 00:07:07.064 00:07:07.064 ' 00:07:07.064 05:24:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.064 05:24:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.064 ************************************ 00:07:07.064 START TEST thread_poller_perf 00:07:07.064 ************************************ 00:07:07.064 05:24:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.064 [2024-11-27 05:24:03.409838] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:07.064 [2024-11-27 05:24:03.409920] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158211 ] 00:07:07.064 [2024-11-27 05:24:03.563372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.323 [2024-11-27 05:24:03.666388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.323 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:08.701 [2024-11-27T04:24:05.288Z] ====================================== 00:07:08.701 [2024-11-27T04:24:05.288Z] busy:2511857460 (cyc) 00:07:08.701 [2024-11-27T04:24:05.288Z] total_run_count: 409000 00:07:08.701 [2024-11-27T04:24:05.288Z] tsc_hz: 2500000000 (cyc) 00:07:08.701 [2024-11-27T04:24:05.288Z] ====================================== 00:07:08.701 [2024-11-27T04:24:05.288Z] poller_cost: 6141 (cyc), 2456 (nsec) 00:07:08.701 00:07:08.701 real 0m1.516s 00:07:08.701 user 0m1.343s 00:07:08.701 sys 0m0.167s 00:07:08.701 05:24:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.701 05:24:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.701 ************************************ 00:07:08.701 END TEST thread_poller_perf 00:07:08.701 ************************************ 00:07:08.701 05:24:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.701 05:24:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:08.701 05:24:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.701 05:24:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.701 ************************************ 00:07:08.701 START TEST thread_poller_perf 00:07:08.701 
************************************ 00:07:08.701 05:24:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.701 [2024-11-27 05:24:05.006897] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:08.701 [2024-11-27 05:24:05.006980] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158810 ] 00:07:08.701 [2024-11-27 05:24:05.162931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.701 [2024-11-27 05:24:05.263264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.701 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:10.078 [2024-11-27T04:24:06.665Z] ====================================== 00:07:10.078 [2024-11-27T04:24:06.665Z] busy:2503324560 (cyc) 00:07:10.078 [2024-11-27T04:24:06.665Z] total_run_count: 5298000 00:07:10.078 [2024-11-27T04:24:06.665Z] tsc_hz: 2500000000 (cyc) 00:07:10.078 [2024-11-27T04:24:06.665Z] ====================================== 00:07:10.078 [2024-11-27T04:24:06.665Z] poller_cost: 472 (cyc), 188 (nsec) 00:07:10.078 00:07:10.078 real 0m1.508s 00:07:10.078 user 0m1.338s 00:07:10.078 sys 0m0.164s 00:07:10.078 05:24:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.078 05:24:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.078 ************************************ 00:07:10.078 END TEST thread_poller_perf 00:07:10.078 ************************************ 00:07:10.078 05:24:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:10.078 00:07:10.078 real 0m3.361s 00:07:10.078 user 0m2.827s 00:07:10.078 sys 0m0.545s 00:07:10.078 05:24:06 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.078 05:24:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.078 ************************************ 00:07:10.078 END TEST thread 00:07:10.078 ************************************ 00:07:10.078 05:24:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:10.078 05:24:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.078 05:24:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.078 05:24:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.078 05:24:06 -- common/autotest_common.sh@10 -- # set +x 00:07:10.078 ************************************ 00:07:10.078 START TEST app_cmdline 00:07:10.078 ************************************ 00:07:10.078 05:24:06 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.337 * Looking for test storage... 00:07:10.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.337 
05:24:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.337 05:24:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.337 --rc genhtml_branch_coverage=1 00:07:10.337 
--rc genhtml_function_coverage=1 00:07:10.337 --rc genhtml_legend=1 00:07:10.337 --rc geninfo_all_blocks=1 00:07:10.337 --rc geninfo_unexecuted_blocks=1 00:07:10.337 00:07:10.337 ' 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.337 --rc genhtml_branch_coverage=1 00:07:10.337 --rc genhtml_function_coverage=1 00:07:10.337 --rc genhtml_legend=1 00:07:10.337 --rc geninfo_all_blocks=1 00:07:10.337 --rc geninfo_unexecuted_blocks=1 00:07:10.337 00:07:10.337 ' 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.337 --rc genhtml_branch_coverage=1 00:07:10.337 --rc genhtml_function_coverage=1 00:07:10.337 --rc genhtml_legend=1 00:07:10.337 --rc geninfo_all_blocks=1 00:07:10.337 --rc geninfo_unexecuted_blocks=1 00:07:10.337 00:07:10.337 ' 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.337 --rc genhtml_branch_coverage=1 00:07:10.337 --rc genhtml_function_coverage=1 00:07:10.337 --rc genhtml_legend=1 00:07:10.337 --rc geninfo_all_blocks=1 00:07:10.337 --rc geninfo_unexecuted_blocks=1 00:07:10.337 00:07:10.337 ' 00:07:10.337 05:24:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:10.337 05:24:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3159406 00:07:10.337 05:24:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3159406 00:07:10.337 05:24:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3159406 ']' 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
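The xtraced `lt 1.15 2` / `cmp_versions` calls above come from scripts/common.sh deciding whether the installed lcov predates version 2 (and therefore still accepts the `--rc lcov_*` overrides). A condensed, self-contained sketch of that comparison; this paraphrases the traced logic and is not the exact scripts/common.sh source:

```shell
# Compare two dotted version strings field-by-field, numerically.
# Mirrors the traced steps: split on ".-:" into arrays, treat missing
# fields as 0, and resolve the requested operator on the first
# differing field.
cmp_versions() {
    local op=$2 IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v n1=${#ver1[@]} n2=${#ver2[@]}
    local max=$(( n1 > n2 ? n1 : n2 ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

if lt 1.15 2; then echo "lcov < 2: enable the --rc lcov_* options"; fi
```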
00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.337 05:24:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.337 [2024-11-27 05:24:06.865227] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:10.337 [2024-11-27 05:24:06.865324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159406 ] 00:07:10.597 [2024-11-27 05:24:07.017423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.597 [2024-11-27 05:24:07.112297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.536 05:24:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.536 05:24:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:11.536 05:24:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:11.536 { 00:07:11.536 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:07:11.536 "fields": { 00:07:11.536 "major": 25, 00:07:11.536 "minor": 1, 00:07:11.536 "patch": 0, 00:07:11.536 "suffix": "-pre", 00:07:11.536 "commit": "2f2acf4eb" 00:07:11.536 } 00:07:11.536 } 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.536 05:24:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:11.536 05:24:08 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.795 request: 00:07:11.795 { 00:07:11.795 "method": "env_dpdk_get_mem_stats", 00:07:11.795 "req_id": 1 00:07:11.795 } 00:07:11.795 Got JSON-RPC error response 00:07:11.795 response: 00:07:11.795 { 00:07:11.795 "code": -32601, 00:07:11.795 "message": "Method not found" 00:07:11.795 } 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.795 05:24:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3159406 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3159406 ']' 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3159406 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159406 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159406' 00:07:11.795 killing process with pid 3159406 00:07:11.795 05:24:08 app_cmdline -- 
common/autotest_common.sh@973 -- # kill 3159406 00:07:11.795 05:24:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 3159406 00:07:14.333 00:07:14.333 real 0m3.953s 00:07:14.333 user 0m4.095s 00:07:14.333 sys 0m0.679s 00:07:14.333 05:24:10 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.333 05:24:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.333 ************************************ 00:07:14.333 END TEST app_cmdline 00:07:14.333 ************************************ 00:07:14.333 05:24:10 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:14.333 05:24:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.333 05:24:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.333 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:14.333 ************************************ 00:07:14.333 START TEST version 00:07:14.333 ************************************ 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:14.333 * Looking for test storage... 
00:07:14.333 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.333 05:24:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.333 05:24:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.333 05:24:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.333 05:24:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.333 05:24:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.333 05:24:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.333 05:24:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.333 05:24:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.333 05:24:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.333 05:24:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.333 05:24:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.333 05:24:10 version -- scripts/common.sh@344 -- # case "$op" in 00:07:14.333 05:24:10 version -- scripts/common.sh@345 -- # : 1 00:07:14.333 05:24:10 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.333 05:24:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.333 05:24:10 version -- scripts/common.sh@365 -- # decimal 1 00:07:14.333 05:24:10 version -- scripts/common.sh@353 -- # local d=1 00:07:14.333 05:24:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.333 05:24:10 version -- scripts/common.sh@355 -- # echo 1 00:07:14.333 05:24:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.333 05:24:10 version -- scripts/common.sh@366 -- # decimal 2 00:07:14.333 05:24:10 version -- scripts/common.sh@353 -- # local d=2 00:07:14.333 05:24:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.333 05:24:10 version -- scripts/common.sh@355 -- # echo 2 00:07:14.333 05:24:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.333 05:24:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.333 05:24:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.333 05:24:10 version -- scripts/common.sh@368 -- # return 0 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.333 --rc genhtml_branch_coverage=1 00:07:14.333 --rc genhtml_function_coverage=1 00:07:14.333 --rc genhtml_legend=1 00:07:14.333 --rc geninfo_all_blocks=1 00:07:14.333 --rc geninfo_unexecuted_blocks=1 00:07:14.333 00:07:14.333 ' 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.333 --rc genhtml_branch_coverage=1 00:07:14.333 --rc genhtml_function_coverage=1 00:07:14.333 --rc genhtml_legend=1 00:07:14.333 --rc geninfo_all_blocks=1 00:07:14.333 --rc geninfo_unexecuted_blocks=1 00:07:14.333 00:07:14.333 ' 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.333 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.333 --rc genhtml_branch_coverage=1 00:07:14.333 --rc genhtml_function_coverage=1 00:07:14.333 --rc genhtml_legend=1 00:07:14.333 --rc geninfo_all_blocks=1 00:07:14.333 --rc geninfo_unexecuted_blocks=1 00:07:14.333 00:07:14.333 ' 00:07:14.333 05:24:10 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.333 --rc genhtml_branch_coverage=1 00:07:14.333 --rc genhtml_function_coverage=1 00:07:14.333 --rc genhtml_legend=1 00:07:14.333 --rc geninfo_all_blocks=1 00:07:14.333 --rc geninfo_unexecuted_blocks=1 00:07:14.333 00:07:14.333 ' 00:07:14.333 05:24:10 version -- app/version.sh@17 -- # get_header_version major 00:07:14.333 05:24:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:14.333 05:24:10 version -- app/version.sh@14 -- # cut -f2 00:07:14.333 05:24:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.333 05:24:10 version -- app/version.sh@17 -- # major=25 00:07:14.333 05:24:10 version -- app/version.sh@18 -- # get_header_version minor 00:07:14.333 05:24:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.333 05:24:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:14.333 05:24:10 version -- app/version.sh@14 -- # cut -f2 00:07:14.333 05:24:10 version -- app/version.sh@18 -- # minor=1 00:07:14.333 05:24:10 version -- app/version.sh@19 -- # get_header_version patch 00:07:14.333 05:24:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:14.333 05:24:10 version -- app/version.sh@14 -- # cut -f2 00:07:14.333 05:24:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.333 05:24:10 
version -- app/version.sh@19 -- # patch=0 00:07:14.333 05:24:10 version -- app/version.sh@20 -- # get_header_version suffix 00:07:14.333 05:24:10 version -- app/version.sh@14 -- # cut -f2 00:07:14.334 05:24:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:14.334 05:24:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.334 05:24:10 version -- app/version.sh@20 -- # suffix=-pre 00:07:14.334 05:24:10 version -- app/version.sh@22 -- # version=25.1 00:07:14.334 05:24:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:14.334 05:24:10 version -- app/version.sh@28 -- # version=25.1rc0 00:07:14.334 05:24:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:14.334 05:24:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:14.334 05:24:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:14.334 05:24:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:14.334 00:07:14.334 real 0m0.262s 00:07:14.334 user 0m0.154s 00:07:14.334 sys 0m0.160s 00:07:14.334 05:24:10 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.334 05:24:10 version -- common/autotest_common.sh@10 -- # set +x 00:07:14.334 ************************************ 00:07:14.334 END TEST version 00:07:14.334 ************************************ 00:07:14.593 05:24:10 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:14.593 05:24:10 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:14.593 05:24:10 -- spdk/autotest.sh@194 -- # uname -s 00:07:14.593 05:24:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:14.593 
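The version.sh steps traced above grep the four `SPDK_VERSION_*` defines out of include/spdk/version.h and assemble `25.1rc0` from them (patch 0 is dropped, and the `-pre` suffix becomes `rc0`). A sketch of that assembly; the here-string stands in for the real header with this run's values, and the real script extracts fields with `cut -f2` on tabs rather than awk:

```shell
# Stand-in for include/spdk/version.h, using the values this run saw.
header='#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"'

# Pull one field out of the header, stripping quotes as version.sh does.
get_header_version() {
    grep -E "^#define SPDK_VERSION_$1" <<< "$header" | awk '{print $3}' | tr -d '"'
}

major=$(get_header_version MAJOR)     # 25
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 0
suffix=$(get_header_version SUFFIX)   # -pre

version=$major.$minor                          # patch joins only if nonzero
if (( patch != 0 )); then version=$version.$patch; fi
if [[ -n $suffix ]]; then version=${version}rc0; fi
echo "$version"
```

The result matches both the header-derived string and the `py_version=25.1rc0` that `python3 -c 'import spdk; print(spdk.__version__)'` reports, which is exactly the equality version.sh asserts at its end.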
05:24:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:14.593 05:24:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:14.593 05:24:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:14.593 05:24:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:14.593 05:24:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:14.593 05:24:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.593 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:14.593 05:24:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:14.593 05:24:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:14.593 05:24:10 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:14.593 05:24:10 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:14.593 05:24:10 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:07:14.593 05:24:10 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:14.593 05:24:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.593 05:24:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.593 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:14.593 ************************************ 00:07:14.593 START TEST nvmf_rdma 00:07:14.593 ************************************ 00:07:14.593 05:24:11 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:14.593 * Looking for test storage... 
00:07:14.593 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:14.593 05:24:11 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.593 05:24:11 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.593 05:24:11 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.853 05:24:11 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.853 --rc genhtml_branch_coverage=1 00:07:14.853 --rc genhtml_function_coverage=1 00:07:14.853 --rc genhtml_legend=1 00:07:14.853 --rc geninfo_all_blocks=1 00:07:14.853 --rc geninfo_unexecuted_blocks=1 00:07:14.853 00:07:14.853 ' 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.853 --rc genhtml_branch_coverage=1 00:07:14.853 --rc genhtml_function_coverage=1 00:07:14.853 --rc genhtml_legend=1 00:07:14.853 --rc geninfo_all_blocks=1 00:07:14.853 --rc geninfo_unexecuted_blocks=1 00:07:14.853 00:07:14.853 ' 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:07:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.853 --rc genhtml_branch_coverage=1 00:07:14.853 --rc genhtml_function_coverage=1 00:07:14.853 --rc genhtml_legend=1 00:07:14.853 --rc geninfo_all_blocks=1 00:07:14.853 --rc geninfo_unexecuted_blocks=1 00:07:14.853 00:07:14.853 ' 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.853 --rc genhtml_branch_coverage=1 00:07:14.853 --rc genhtml_function_coverage=1 00:07:14.853 --rc genhtml_legend=1 00:07:14.853 --rc geninfo_all_blocks=1 00:07:14.853 --rc geninfo_unexecuted_blocks=1 00:07:14.853 00:07:14.853 ' 00:07:14.853 05:24:11 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:14.853 05:24:11 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:14.853 05:24:11 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.853 05:24:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 ************************************ 00:07:14.853 START TEST nvmf_target_core 00:07:14.853 ************************************ 00:07:14.853 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:14.853 * Looking for test storage... 
00:07:14.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:14.853 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.854 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.114 --rc genhtml_branch_coverage=1 00:07:15.114 --rc genhtml_function_coverage=1 00:07:15.114 --rc genhtml_legend=1 00:07:15.114 --rc geninfo_all_blocks=1 00:07:15.114 --rc geninfo_unexecuted_blocks=1 00:07:15.114 00:07:15.114 ' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.114 --rc 
genhtml_branch_coverage=1 00:07:15.114 --rc genhtml_function_coverage=1 00:07:15.114 --rc genhtml_legend=1 00:07:15.114 --rc geninfo_all_blocks=1 00:07:15.114 --rc geninfo_unexecuted_blocks=1 00:07:15.114 00:07:15.114 ' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.114 --rc genhtml_branch_coverage=1 00:07:15.114 --rc genhtml_function_coverage=1 00:07:15.114 --rc genhtml_legend=1 00:07:15.114 --rc geninfo_all_blocks=1 00:07:15.114 --rc geninfo_unexecuted_blocks=1 00:07:15.114 00:07:15.114 ' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.114 --rc genhtml_branch_coverage=1 00:07:15.114 --rc genhtml_function_coverage=1 00:07:15.114 --rc genhtml_legend=1 00:07:15.114 --rc geninfo_all_blocks=1 00:07:15.114 --rc geninfo_unexecuted_blocks=1 00:07:15.114 00:07:15.114 ' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.114 05:24:11 
nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.114 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' 
']' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.114 ************************************ 00:07:15.114 START TEST nvmf_abort 00:07:15.114 ************************************ 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:15.114 * Looking for test storage... 
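The xtrace above steps through scripts/common.sh's `lt 1.15 2` check: each version string is split on `IFS=.-:` into an array, then compared component by component. A minimal standalone sketch of that technique, reconstructed from the trace rather than copied from the upstream script:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version "less than" check traced above
# (scripts/common.sh lt/cmp_versions); reconstructed from the xtrace,
# not copied verbatim from upstream.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # split "2"    -> (2)
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing components compare as 0, so "2" behaves like "2.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Comparing component-wise as integers is why `1.9 < 1.15` holds here (9 < 15), which is the desired ordering for tool versions even though it differs from a plain string or decimal compare.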
00:07:15.114 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:15.114 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.115 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.115 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 
00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.375 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.376 --rc genhtml_branch_coverage=1 00:07:15.376 --rc genhtml_function_coverage=1 00:07:15.376 --rc genhtml_legend=1 00:07:15.376 --rc 
geninfo_all_blocks=1 00:07:15.376 --rc geninfo_unexecuted_blocks=1 00:07:15.376 00:07:15.376 ' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.376 --rc genhtml_branch_coverage=1 00:07:15.376 --rc genhtml_function_coverage=1 00:07:15.376 --rc genhtml_legend=1 00:07:15.376 --rc geninfo_all_blocks=1 00:07:15.376 --rc geninfo_unexecuted_blocks=1 00:07:15.376 00:07:15.376 ' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.376 --rc genhtml_branch_coverage=1 00:07:15.376 --rc genhtml_function_coverage=1 00:07:15.376 --rc genhtml_legend=1 00:07:15.376 --rc geninfo_all_blocks=1 00:07:15.376 --rc geninfo_unexecuted_blocks=1 00:07:15.376 00:07:15.376 ' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.376 --rc genhtml_branch_coverage=1 00:07:15.376 --rc genhtml_function_coverage=1 00:07:15.376 --rc genhtml_legend=1 00:07:15.376 --rc geninfo_all_blocks=1 00:07:15.376 --rc geninfo_unexecuted_blocks=1 00:07:15.376 00:07:15.376 ' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.376 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- 
# gather_supported_nvmf_pci_devs 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:15.376 05:24:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.363 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:25.364 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:25.364 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
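The probe loop above prints `Found 0000:d9:00.0 (0x15b3 - 0x1015)` for each Mellanox function it matches. The real nvmf/common.sh keys off a `pci_bus_cache` built elsewhere; a simplified standalone sketch of the same idea reads the per-device `vendor`/`device` attribute files under a sysfs-style tree (the root is a parameter here so the sketch can run against a throwaway fake tree):

```shell
#!/usr/bin/env bash
# Sketch of the PCI probe traced above: walk a sysfs-style device tree and
# report devices whose vendor ID matches (0x15b3 = Mellanox in the log).
# Simplified relative to nvmf/common.sh, which uses a prebuilt pci_bus_cache.
scan_pci() {
    local root=$1 vendor=$2 dev vid did
    for dev in "$root"/*; do
        [ -e "$dev/vendor" ] || continue      # skip non-device entries
        read -r vid < "$dev/vendor"
        [ "$vid" = "$vendor" ] || continue
        read -r did < "$dev/device"
        echo "Found ${dev##*/} ($vid - $did)"
    done
}

# Demo against a throwaway tree mimicking the NIC found in the log above
root=$(mktemp -d)
mkdir -p "$root/0000:d9:00.0"
echo 0x15b3 > "$root/0000:d9:00.0/vendor"
echo 0x1015 > "$root/0000:d9:00.0/device"
scan_pci "$root" 0x15b3   # prints: Found 0000:d9:00.0 (0x15b3 - 0x1015)
rm -rf "$root"
```

On a live system the root would be `/sys/bus/pci/devices`, where the kernel exposes `vendor` and `device` as hex strings, matching the `(0x15b3 - 0x1015)` pairs in the trace.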
00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:25.364 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:25.364 Found net devices under 
0000:d9:00.1: mlx_0_1 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo 
mlx_0_1 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:25.364 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.364 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:25.364 altname enp217s0f0np0 00:07:25.364 altname ens818f0np0 00:07:25.364 inet 192.168.100.8/24 scope global mlx_0_0 00:07:25.364 valid_lft forever preferred_lft forever 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.364 05:24:20 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:25.364 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:25.364 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.364 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:25.364 altname enp217s0f1np1 00:07:25.364 altname ens818f1np1 00:07:25.365 inet 192.168.100.9/24 scope global mlx_0_1 00:07:25.365 valid_lft forever preferred_lft forever 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.365 05:24:20 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ 
-f1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:25.365 192.168.100.9' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:25.365 192.168.100.9' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:25.365 192.168.100.9' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:25.365 05:24:20 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3164579 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3164579 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3164579 ']' 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 05:24:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:25.365 [2024-11-27 05:24:20.709470] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:25.365 [2024-11-27 05:24:20.709584] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.365 [2024-11-27 05:24:20.868017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.365 [2024-11-27 05:24:20.975314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.365 [2024-11-27 05:24:20.975362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.365 [2024-11-27 05:24:20.975375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.365 [2024-11-27 05:24:20.975388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.365 [2024-11-27 05:24:20.975398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.365 [2024-11-27 05:24:20.977791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.365 [2024-11-27 05:24:20.977854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.365 [2024-11-27 05:24:20.977862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 [2024-11-27 05:24:21.589340] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f2e3c592940) succeed. 00:07:25.365 [2024-11-27 05:24:21.604751] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f2e3c54e940) succeed. 
00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 Malloc0 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 Delay0 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 05:24:21 
nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 [2024-11-27 05:24:21.922642] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:25.365 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.366 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:25.366 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.366 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.366 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.366 05:24:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:25.625 [2024-11-27 05:24:22.077931] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:28.162 Initializing NVMe Controllers 00:07:28.162 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:28.162 controller IO queue size 128 less than required 00:07:28.162 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 
00:07:28.162 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:28.162 Initialization complete. Launching workers. 00:07:28.162 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37904 00:07:28.163 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37965, failed to submit 62 00:07:28.163 success 37907, unsuccessful 58, failed 0 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:28.163 rmmod nvme_rdma 00:07:28.163 rmmod nvme_fabrics 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3164579 ']' 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3164579 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3164579 ']' 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3164579 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3164579 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3164579' 00:07:28.163 killing process with pid 3164579 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3164579 00:07:28.163 05:24:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3164579 00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:29.542 00:07:29.542 real 0m14.528s 00:07:29.542 user 0m19.035s 00:07:29.542 sys 0m7.545s 
00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.542 ************************************ 00:07:29.542 END TEST nvmf_abort 00:07:29.542 ************************************ 00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.542 05:24:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.803 ************************************ 00:07:29.803 START TEST nvmf_ns_hotplug_stress 00:07:29.803 ************************************ 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:29.803 * Looking for test storage... 
00:07:29.803 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.803 --rc genhtml_branch_coverage=1 00:07:29.803 --rc genhtml_function_coverage=1 00:07:29.803 --rc genhtml_legend=1 00:07:29.803 --rc geninfo_all_blocks=1 00:07:29.803 --rc geninfo_unexecuted_blocks=1 00:07:29.803 00:07:29.803 ' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.803 --rc genhtml_branch_coverage=1 00:07:29.803 --rc genhtml_function_coverage=1 00:07:29.803 --rc genhtml_legend=1 00:07:29.803 --rc geninfo_all_blocks=1 00:07:29.803 --rc geninfo_unexecuted_blocks=1 00:07:29.803 00:07:29.803 ' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.803 --rc genhtml_branch_coverage=1 00:07:29.803 --rc genhtml_function_coverage=1 00:07:29.803 --rc genhtml_legend=1 00:07:29.803 --rc geninfo_all_blocks=1 00:07:29.803 --rc geninfo_unexecuted_blocks=1 00:07:29.803 00:07:29.803 ' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.803 --rc genhtml_branch_coverage=1 00:07:29.803 --rc genhtml_function_coverage=1 00:07:29.803 --rc genhtml_legend=1 00:07:29.803 --rc geninfo_all_blocks=1 00:07:29.803 --rc geninfo_unexecuted_blocks=1 00:07:29.803 00:07:29.803 ' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.803 05:24:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.803 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.804 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.804 05:24:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.789 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.790 05:24:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:39.790 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:39.790 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:39.790 05:24:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:39.790 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.790 05:24:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:39.790 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:39.790 05:24:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:39.790 05:24:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:39.790 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:39.791 6: mlx_0_0: mtu 1500 qdisc mq state 
DOWN group default qlen 1000 00:07:39.791 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:39.791 altname enp217s0f0np0 00:07:39.791 altname ens818f0np0 00:07:39.791 inet 192.168.100.8/24 scope global mlx_0_0 00:07:39.791 valid_lft forever preferred_lft forever 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:39.791 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:39.791 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:39.791 altname enp217s0f1np1 00:07:39.791 altname ens818f1np1 00:07:39.791 inet 192.168.100.9/24 scope global mlx_0_1 00:07:39.791 valid_lft forever preferred_lft forever 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.791 05:24:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:39.791 05:24:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:39.791 192.168.100.9' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:39.791 192.168.100.9' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:39.791 192.168.100.9' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- 
# timing_enter start_nvmf_tgt 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3169612 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3169612 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3169612 ']' 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.791 05:24:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.791 [2024-11-27 05:24:34.898190] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:39.791 [2024-11-27 05:24:34.898286] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.791 [2024-11-27 05:24:35.049492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.791 [2024-11-27 05:24:35.147577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.791 [2024-11-27 05:24:35.147634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.791 [2024-11-27 05:24:35.147646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.791 [2024-11-27 05:24:35.147678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.791 [2024-11-27 05:24:35.147688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:39.791 [2024-11-27 05:24:35.149872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.791 [2024-11-27 05:24:35.149936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.791 [2024-11-27 05:24:35.149943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.791 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.791 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:39.791 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.791 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.792 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.792 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.792 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:39.792 05:24:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:39.792 [2024-11-27 05:24:35.943312] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f757e53e940) succeed. 00:07:39.792 [2024-11-27 05:24:35.952738] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f757e3bd940) succeed. 
00:07:39.792 05:24:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:39.792 05:24:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:40.052 [2024-11-27 05:24:36.536256] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:40.052 05:24:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:40.311 05:24:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:40.569 Malloc0 00:07:40.569 05:24:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:40.569 Delay0 00:07:40.827 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.827 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:41.085 NULL1 00:07:41.085 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:41.344 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:41.344 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3170170 00:07:41.344 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:41.344 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.603 05:24:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.603 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:41.603 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:41.861 true 00:07:41.861 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:41.861 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.120 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.379 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:42.379 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:42.379 true 00:07:42.379 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:42.379 05:24:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.638 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.896 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:42.896 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:43.155 true 00:07:43.155 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:43.155 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.413 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.413 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:43.413 05:24:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:43.672 true 00:07:43.672 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:43.672 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.931 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.191 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:44.191 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:44.191 true 00:07:44.191 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:44.191 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.450 05:24:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.710 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:44.710 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:44.968 true 00:07:44.968 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:44.968 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.228 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.228 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:45.228 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:45.487 true 00:07:45.487 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:45.487 05:24:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.746 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.005 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:46.005 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:46.005 true 
00:07:46.005 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:46.005 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.263 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.522 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:46.522 05:24:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:46.781 true 00:07:46.781 05:24:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:46.781 05:24:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.781 05:24:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.039 05:24:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:47.039 05:24:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:47.298 true 00:07:47.298 05:24:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:47.298 05:24:43 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.557 05:24:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.816 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:47.816 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:47.816 true 00:07:47.816 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:47.816 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.076 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.336 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:48.336 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:48.596 true 00:07:48.596 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:48.596 05:24:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.596 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.856 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:48.856 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:49.116 true 00:07:49.116 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:49.116 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.376 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.376 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:49.376 05:24:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:49.636 true 00:07:49.636 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:49.636 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.895 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.154 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:50.154 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:50.154 true 00:07:50.154 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:50.154 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.413 05:24:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.768 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:50.768 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:50.768 true 00:07:50.768 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:50.768 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.080 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:07:51.379 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:51.379 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:51.379 true 00:07:51.379 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:51.379 05:24:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.645 05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.904 05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:51.904 05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:52.163 true 00:07:52.163 05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:52.163 05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.163 05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.422 05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:52.422 
05:24:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:52.688 true 00:07:52.688 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:52.688 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.947 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.947 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:52.947 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:53.206 true 00:07:53.206 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:53.206 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.465 05:24:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.724 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:53.724 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:53.983 true 00:07:53.983 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:53.983 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.983 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.243 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:54.243 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:54.502 true 00:07:54.502 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:54.502 05:24:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.762 05:24:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.762 05:24:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:54.762 05:24:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:55.021 true 00:07:55.021 05:24:51 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:55.021 05:24:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.280 05:24:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.539 05:24:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:55.539 05:24:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:55.539 true 00:07:55.798 05:24:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:55.798 05:24:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.798 05:24:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.056 05:24:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:56.056 05:24:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:56.315 true 00:07:56.315 05:24:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:56.315 05:24:52 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.574 05:24:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.574 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:56.574 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:56.833 true 00:07:56.833 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:56.833 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.093 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.352 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:57.352 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:57.352 true 00:07:57.352 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:57.353 05:24:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.613 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.872 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:57.872 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:58.131 true 00:07:58.131 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:58.131 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.390 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.390 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:58.390 05:24:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:58.648 true 00:07:58.648 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:58.648 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.908 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.167 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:59.167 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:59.167 true 00:07:59.167 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:59.167 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.425 05:24:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.684 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:59.684 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:59.943 true 00:07:59.943 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:07:59.943 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.943 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:08:00.202 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:00.202 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:00.461 true 00:08:00.461 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:00.461 05:24:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.721 05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.980 05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:00.980 05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:00.980 true 00:08:00.980 05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:00.980 05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.239 05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.497 05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:01.497 
05:24:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:01.497 true 00:08:01.755 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:01.755 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.755 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.014 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:02.014 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:02.273 true 00:08:02.273 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:02.273 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.532 05:24:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.800 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:02.800 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:02.800 true 00:08:02.800 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:02.801 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.060 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.320 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:03.320 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:03.320 true 00:08:03.578 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:03.578 05:24:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.578 05:25:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.837 05:25:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:03.837 05:25:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:04.095 true 00:08:04.096 05:25:00 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:04.096 05:25:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.355 05:25:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.355 05:25:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:04.355 05:25:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:04.615 true 00:08:04.615 05:25:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:04.615 05:25:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.876 05:25:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.134 05:25:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:05.134 05:25:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:05.134 true 00:08:05.393 05:25:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:05.393 05:25:01 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.393 05:25:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.653 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:05.653 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:05.911 true 00:08:05.911 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:05.911 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.170 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.170 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:06.170 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:06.429 true 00:08:06.429 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:06.429 05:25:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.689 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.948 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:06.948 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:06.948 true 00:08:06.948 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:06.948 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.207 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.466 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:07.466 05:25:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:07.725 true 00:08:07.725 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:07.725 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.725 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.984 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:07.985 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:08.244 true 00:08:08.244 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:08.244 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.503 05:25:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.503 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:08.503 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:08.762 true 00:08:08.762 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:08.762 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.022 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:08:09.282 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:09.282 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:09.282 true 00:08:09.542 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:09.542 05:25:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.542 05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.801 05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:09.801 05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:10.061 true 00:08:10.061 05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:10.061 05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.320 05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.320 05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:10.320 
05:25:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:10.579 true 00:08:10.579 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:10.579 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.838 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.838 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:10.838 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:11.097 true 00:08:11.097 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:11.097 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.357 05:25:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.617 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:11.617 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:11.617 true 00:08:11.876 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:11.876 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.876 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.135 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:12.135 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:12.394 true 00:08:12.394 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170 00:08:12.395 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.654 05:25:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.654 Initializing NVMe Controllers 00:08:12.654 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.654 Controller IO queue size 128, less than required. 00:08:12.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:12.654 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:12.654 Initialization complete. Launching workers.
00:08:12.654 ========================================================
00:08:12.654 Latency(us)
00:08:12.654 Device Information : IOPS MiB/s Average min max
00:08:12.654 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36100.30 17.63 3545.47 2019.04 5257.87
00:08:12.654 ========================================================
00:08:12.654 Total : 36100.30 17.63 3545.47 2019.04 5257.87
00:08:12.654
00:08:12.654 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:08:12.654 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:08:12.913 true
00:08:12.913 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170170
00:08:12.913 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3170170) - No such process
00:08:12.913 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3170170
00:08:12.913 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.172 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:13.432 null0 00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.432 05:25:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:13.692 null1 00:08:13.692 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.692 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.692 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:13.950 null2 00:08:13.950 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.950 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.950 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:14.209 null3 00:08:14.209 05:25:10 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.209 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.210 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:14.210 null4 00:08:14.210 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.210 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.210 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:14.469 null5 00:08:14.469 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.469 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.469 05:25:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:14.728 null6 00:08:14.728 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.728 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.728 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:14.988 null7 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3176187 3176188 3176189 3176192 3176194 3176195 3176197 3176199
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.988 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.248 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.508 05:25:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:15.508 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:15.767 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.767 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.767 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:15.767 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.768 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:16.027 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:16.287 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.547 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:16.547 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:16.547 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:16.547 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:16.547 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:16.547 05:25:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.547 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:16.806 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:16.806 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:16.806 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.806 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:16.806 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:16.807 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:16.807 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:16.807 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.066 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.067 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.327 05:25:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:17.587 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.846 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.847 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.105 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.106 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.106 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:08:18.106 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.106 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.106 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.106 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.106 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.365 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.625 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.625 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.625 05:25:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.625 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.885 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:19.144 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:19.145 rmmod nvme_rdma 00:08:19.145 rmmod nvme_fabrics 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3169612 ']' 
00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3169612 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3169612 ']' 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3169612 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169612 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169612' 00:08:19.145 killing process with pid 3169612 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3169612 00:08:19.145 05:25:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3169612 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:21.054 00:08:21.054 real 0m51.178s 00:08:21.054 user 3m33.276s 00:08:21.054 sys 0m17.983s 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.054 05:25:17 
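[editor's note] The teardown trace above ("killing process with pid 3169612") follows the killprocess pattern from autotest_common.sh: liveness check with `kill -0`, a `ps -o comm=` lookup, a guard so the sudo wrapper itself is never killed, then kill and wait. A pared-down sketch of the visible steps (not the full autotest_common.sh function, which also handles uname branches and retries):

```shell
# Simplified killprocess, mirroring the xtrace above: check the pid is
# alive, look up its command name, refuse to kill sudo, then kill + reap.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 0     # already gone: nothing to do
  name=$(ps --no-headers -o comm= "$pid")    # comm lookup, as in the log
  if [ "$name" = sudo ]; then                # never kill the sudo wrapper
    return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap it if it is our child
}
```

In the log the guarded name is `reactor_1` (the SPDK nvmf target's reactor thread), so the sudo check passes and the process is killed and waited on before module unload.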
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.054 ************************************ 00:08:21.054 END TEST nvmf_ns_hotplug_stress 00:08:21.054 ************************************ 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.054 ************************************ 00:08:21.054 START TEST nvmf_delete_subsystem 00:08:21.054 ************************************ 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:21.054 * Looking for test storage... 
00:08:21.054 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:21.054 
05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.054 --rc genhtml_branch_coverage=1 00:08:21.054 --rc genhtml_function_coverage=1 00:08:21.054 --rc genhtml_legend=1 00:08:21.054 --rc geninfo_all_blocks=1 00:08:21.054 --rc geninfo_unexecuted_blocks=1 00:08:21.054 00:08:21.054 ' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.054 --rc genhtml_branch_coverage=1 00:08:21.054 --rc genhtml_function_coverage=1 00:08:21.054 --rc genhtml_legend=1 00:08:21.054 --rc geninfo_all_blocks=1 00:08:21.054 --rc geninfo_unexecuted_blocks=1 00:08:21.054 00:08:21.054 ' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.054 --rc genhtml_branch_coverage=1 00:08:21.054 --rc genhtml_function_coverage=1 00:08:21.054 --rc genhtml_legend=1 00:08:21.054 --rc geninfo_all_blocks=1 00:08:21.054 --rc geninfo_unexecuted_blocks=1 00:08:21.054 00:08:21.054 ' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.054 --rc genhtml_branch_coverage=1 00:08:21.054 --rc genhtml_function_coverage=1 00:08:21.054 --rc genhtml_legend=1 00:08:21.054 --rc geninfo_all_blocks=1 00:08:21.054 --rc geninfo_unexecuted_blocks=1 00:08:21.054 00:08:21.054 ' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.054 05:25:17 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:08:21.054 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.055 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:21.055 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.055 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.055 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.055 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.055 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.055 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.314 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.315 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- 
# nvmftestinit 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.315 05:25:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.442 05:25:25 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:29.442 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:29.442 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:29.443 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 
-- # (( 0 > 0 )) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:29.443 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:29.443 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:29.443 05:25:25 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:29.443 05:25:25 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:29.443 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:29.443 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:29.443 altname enp217s0f0np0 00:08:29.443 altname ens818f0np0 00:08:29.443 inet 192.168.100.8/24 scope global mlx_0_0 00:08:29.443 valid_lft forever preferred_lft forever 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for 
nic_name in $(get_rdma_if_list) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:29.443 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:29.443 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:29.443 altname enp217s0f1np1 00:08:29.443 altname ens818f1np1 00:08:29.443 inet 192.168.100.9/24 scope global mlx_0_1 00:08:29.443 valid_lft forever preferred_lft forever 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 
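The `get_ip_address` calls traced above resolve each RDMA interface to its IPv4 address with an `ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1` pipeline. A sketch of that parsing, fed a canned line (mirroring the mlx_0_0 output in the log) so it runs without RDMA hardware:

```shell
# Sketch of the get_ip_address pipeline: field 4 of `ip -o -4 addr show`
# is the CIDR address; cut drops the /24 prefix length.
line='6: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'
ip_addr=$(echo "$line" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # 192.168.100.8
```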
00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.443 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:29.444 05:25:25 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:29.444 192.168.100.9' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:29.444 192.168.100.9' 00:08:29.444 05:25:25 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:29.444 192.168.100.9' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3181340 00:08:29.444 05:25:25 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3181340 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3181340 ']' 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.444 05:25:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 [2024-11-27 05:25:25.939571] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:29.444 [2024-11-27 05:25:25.939673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.703 [2024-11-27 05:25:26.091920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:29.703 [2024-11-27 05:25:26.186253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.703 [2024-11-27 05:25:26.186305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:29.703 [2024-11-27 05:25:26.186318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.703 [2024-11-27 05:25:26.186347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.703 [2024-11-27 05:25:26.186358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.703 [2024-11-27 05:25:26.188570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.703 [2024-11-27 05:25:26.188580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.271 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.271 [2024-11-27 05:25:26.802963] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7fe8389bd940) succeed. 
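The `NVMF_FIRST_TARGET_IP`/`NVMF_SECOND_TARGET_IP` assignments traced earlier (nvmf/common.sh@485-486) split the newline-separated `RDMA_IP_LIST` with `head`/`tail`. A standalone sketch of that split, using the two addresses the log discovered:

```shell
# Sketch: first target IP is line 1 of RDMA_IP_LIST, second target IP is
# line 2 (tail -n +2 skips the first line, head -n 1 takes the next).
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
```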
00:08:30.271 [2024-11-27 05:25:26.812136] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7fe838979940) succeed. 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 [2024-11-27 05:25:26.967740] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 NULL1 00:08:30.530 05:25:26 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 Delay0 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3181621 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:30.530 05:25:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:30.789 [2024-11-27 05:25:27.125849] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. 
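Taken together, the `rpc_cmd` calls traced above (delete_subsystem.sh@15 through @26) amount to the following control-plane sequence. This is a hedged sketch, not the harness script itself: the function name `setup_delete_subsystem_test` and the `RPC` dry-run wrapper are illustrative, while the RPC method names and arguments are exactly those in the trace. `RPC` defaults to `echo` so the sketch runs without a live `nvmf_tgt`; point it at SPDK's `scripts/rpc.py` to execute for real.

```shell
# Dry-run sketch of the control-plane sequence from the trace above.
# RPC defaults to `echo` (prints the commands); set RPC="scripts/rpc.py"
# against a running nvmf_tgt to actually issue them.
RPC="${RPC:-echo}"

setup_delete_subsystem_test() {
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

setup_delete_subsystem_test
```

The delay bdev (`Delay0`, one-second latencies on every op) is what keeps I/O in flight long enough for the subsequent `nvmf_delete_subsystem` to race against active traffic.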
This behavior is deprecated and will be removed in a future release.
00:08:32.694 05:25:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:32.694 05:25:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.694 05:25:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.631 NVMe io qpair process completion error
00:08:33.890 05:25:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.890 05:25:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:33.890 05:25:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3181621
00:08:33.890 05:25:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:34.150 05:25:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:34.150 05:25:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3181621
00:08:34.150 05:25:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:34.717 Write completed with error (sct=0, sc=8)
00:08:34.717 starting I/O failed: -6
00:08:34.717 Read completed with error (sct=0, sc=8)
00:08:34.717 starting I/O failed: -6
00:08:34.719 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:34.719 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3181621
00:08:34.719 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:34.719 Initializing NVMe Controllers
00:08:34.719 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:34.719 Controller IO queue size 128, less than required.
00:08:34.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:34.719 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:34.719 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:34.719 Initialization complete. Launching workers.
00:08:34.719 ========================================================
00:08:34.719 Latency(us)
00:08:34.719 Device Information : IOPS MiB/s Average min max
00:08:34.719 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.47 0.04 1593993.63 1000162.42 2976840.14
00:08:34.719 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.47 0.04 1595946.51 1001139.08 2978544.66
00:08:34.719 ========================================================
00:08:34.719 Total : 160.93 0.08 1594970.07 1000162.42 2978544.66
00:08:34.719
00:08:34.719 [2024-11-27 05:25:31.263705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:08:34.719 [2024-11-27 05:25:31.263772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:08:34.719 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3181621
00:08:35.285 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3181621) - No such process
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3181621
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3181621
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3181621
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:35.285 [2024-11-27 05:25:31.763345] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:35.285 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:35.286 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3182433
00:08:35.286 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:35.286 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:35.286 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433
00:08:35.286 05:25:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:35.544 [2024-11-27 05:25:31.900087] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
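The repeated `kill -0 3182433` / `sleep 0.5` records that follow are the harness polling the perf process until it exits on its own. That pattern can be sketched self-containedly; the helper name `wait_for_pid_exit` is illustrative (the real script counts iterations inline with `(( delay++ > 20 ))`), and a short `sleep` stands in for `spdk_nvme_perf`:

```shell
# Polling pattern behind the repeated `kill -0 <pid>` / `sleep 0.5` records:
# wait for a process to exit, giving up after a retry budget.
wait_for_pid_exit() {
  pid=$1 budget=$2 delay=0
  while kill -0 "$pid" 2>/dev/null; do   # kill -0 probes without signaling
    if [ "$delay" -gt "$budget" ]; then
      return 1                           # still alive after the budget: fail
    fi
    delay=$((delay + 1))
    sleep 0.5
  done
  return 0                               # kill -0 failed: process is gone
}

sleep 1 &                                # stand-in for the perf process
wait_for_pid_exit $! 30 && echo "perf exited"
```

When the loop falls through, `kill -0` has started failing, which is exactly the "No such process" record the log shows once perf finishes.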
00:08:35.801 05:25:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:35.801 05:25:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:35.801 05:25:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:36.368 05:25:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.368 05:25:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:36.368 05:25:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:36.933 05:25:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.933 05:25:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:36.933 05:25:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:37.501 05:25:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:37.501 05:25:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:37.501 05:25:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:37.760 05:25:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:37.760 05:25:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:37.760 05:25:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:38.328 05:25:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
00:08:38.328 05:25:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:38.328 05:25:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:38.897 05:25:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:38.897 05:25:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:38.897 05:25:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:39.466 05:25:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:39.466 05:25:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:39.466 05:25:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.035 05:25:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.035 05:25:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:40.035 05:25:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.294 05:25:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.294 05:25:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:40.294 05:25:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.863 05:25:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.863 05:25:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:40.863 
05:25:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.432 05:25:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.432 05:25:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:41.432 05:25:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.001 05:25:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.001 05:25:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:42.001 05:25:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.570 05:25:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.570 05:25:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433 00:08:42.570 05:25:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.570 Initializing NVMe Controllers 00:08:42.570 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:42.570 Controller IO queue size 128, less than required. 00:08:42.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:42.570 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:42.570 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:42.570 Initialization complete. Launching workers. 
00:08:42.570 ========================================================
00:08:42.570 Latency(us)
00:08:42.570 Device Information : IOPS MiB/s Average min max
00:08:42.570 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001489.28 1000060.42 1004284.76
00:08:42.570 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002868.86 1000074.52 1007436.62
00:08:42.570 ========================================================
00:08:42.570 Total : 256.00 0.12 1002179.07 1000060.42 1007436.62
00:08:42.570
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3182433
00:08:42.828 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3182433) - No such process
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3182433
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:42.828 rmmod nvme_rdma 00:08:42.828 rmmod nvme_fabrics 00:08:42.828 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3181340 ']' 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3181340 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3181340 ']' 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3181340 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181340 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181340' 00:08:43.086 killing process with pid 3181340 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@973 -- # kill 3181340 00:08:43.086 05:25:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3181340 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:44.465 00:08:44.465 real 0m23.465s 00:08:44.465 user 0m52.621s 00:08:44.465 sys 0m7.719s 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.465 ************************************ 00:08:44.465 END TEST nvmf_delete_subsystem 00:08:44.465 ************************************ 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.465 ************************************ 00:08:44.465 START TEST nvmf_host_management 00:08:44.465 ************************************ 00:08:44.465 05:25:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:44.726 * Looking for test storage... 
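The teardown traced above kills the target app by PID, then `wait`s on that PID so the shell reaps the job (and observes its exit status) before the next test starts. A self-contained sketch of that kill-then-wait teardown, with an illustrative stand-in process rather than the real nvmf target:

```shell
#!/usr/bin/env bash
# Stand-in for the long-running nvmf target application.
sleep 30 &
app_pid=$!

kill "$app_pid"                 # send SIGTERM
wait "$app_pid" || status=$?    # reap the job; 128+15 means it died from SIGTERM
echo "reaped $app_pid with status ${status:-0}"
```

Reaping via `wait` is what lets the harness print an accurate exit status and guarantees no zombie is left behind when `run_test` launches the next suite.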
00:08:44.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:44.726 05:25:41 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
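The `lt 1.15 2` trace above compares dotted version strings component by component: split both on `.`/`-`/`:`, pad the shorter with zeros, and compare numerically from the left. A simplified re-sketch of that logic (not the scripts/common.sh implementation itself, and assuming purely numeric components):

```shell
#!/usr/bin/env bash
# Return 0 (true) when version $1 is strictly less than version $2.
version_lt() {
    local IFS=.-:                  # same separators cmp_versions splits on
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                       # equal is not less-than
}

if version_lt 1.15 2; then echo "1.15 < 2"; fi
```

This is why the harness concludes lcov 1.15 predates 2.x and picks the legacy `--rc lcov_*` option names seen in the following lines.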
00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.726 --rc genhtml_branch_coverage=1 00:08:44.726 --rc genhtml_function_coverage=1 00:08:44.726 --rc genhtml_legend=1 00:08:44.726 --rc geninfo_all_blocks=1 00:08:44.726 --rc geninfo_unexecuted_blocks=1 00:08:44.726 00:08:44.726 ' 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.726 --rc genhtml_branch_coverage=1 00:08:44.726 --rc genhtml_function_coverage=1 00:08:44.726 --rc genhtml_legend=1 00:08:44.726 --rc geninfo_all_blocks=1 00:08:44.726 --rc geninfo_unexecuted_blocks=1 00:08:44.726 00:08:44.726 ' 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.726 --rc genhtml_branch_coverage=1 00:08:44.726 --rc genhtml_function_coverage=1 00:08:44.726 --rc genhtml_legend=1 00:08:44.726 --rc geninfo_all_blocks=1 00:08:44.726 --rc geninfo_unexecuted_blocks=1 00:08:44.726 00:08:44.726 ' 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.726 --rc genhtml_branch_coverage=1 00:08:44.726 --rc genhtml_function_coverage=1 00:08:44.726 --rc genhtml_legend=1 00:08:44.726 --rc geninfo_all_blocks=1 00:08:44.726 --rc geninfo_unexecuted_blocks=1 00:08:44.726 00:08:44.726 ' 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@7 -- # uname -s 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.726 05:25:41 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.726 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.727 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.727 05:25:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.717 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.717 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.717 
05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.717 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.717 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.717 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.717 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.717 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
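The `gather_supported_nvmf_pci_devs` trace above builds one bash array of "vendor:device" IDs per NIC family (`e810`, `x722`, `mlx`) and then concatenates them into `pci_devs` with `+=("${arr[@]}")`. The same array mechanics in isolation, using a few of the IDs visible in the trace:

```shell
#!/usr/bin/env bash
# Per-family device-ID arrays, as in nvmf/common.sh (subset of IDs).
intel=0x8086 mellanox=0x15b3
e810=("$intel:0x1592" "$intel:0x159b")
mlx=("$mellanox:0x1015" "$mellanox:0x1017")

pci_devs=("${e810[@]}")
pci_devs+=("${mlx[@]}")        # append, preserving element boundaries

echo "${#pci_devs[@]} candidate IDs"   # → 4 candidate IDs
```

Quoting `"${arr[@]}"` during the append is what keeps each ID a separate element; an unquoted expansion would re-split on whitespace. The trace then filters `pci_devs` down to the mlx list because the driver in use is mlx5.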
00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.718 05:25:49 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:54.718 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:54.718 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:54.718 05:25:49 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:54.718 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.718 05:25:49 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:54.718 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:54.718 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:54.719 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.719 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:54.719 altname enp217s0f0np0 00:08:54.719 altname ens818f0np0 00:08:54.719 inet 192.168.100.8/24 scope global mlx_0_0 00:08:54.719 valid_lft forever 
preferred_lft forever 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:54.719 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.719 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:54.719 altname enp217s0f1np1 00:08:54.719 altname ens818f1np1 00:08:54.719 inet 192.168.100.9/24 scope global mlx_0_1 00:08:54.719 valid_lft forever preferred_lft forever 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 
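The `get_ip_address` helper exercised above (nvmf/common.sh@116-117) reduces one line of `ip -o -4 addr show` output to a bare IPv4 address. A minimal standalone sketch of that pipeline — `parse_ip_field` is a hypothetical helper operating on a captured line, so it runs without a real mlx_0_* interface present:

```shell
#!/usr/bin/env bash
# Sketch of the address-extraction pipeline from the log:
#   ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1
parse_ip_field() {
    # In `ip -o -4` (one-line) output, field 4 is "ADDR/PREFIX",
    # e.g. 192.168.100.8/24; cut strips the prefix length.
    echo "$1" | awk '{print $4}' | cut -d/ -f1
}

line='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
parse_ip_field "$line"   # prints 192.168.100.8
```

The real helper runs the live `ip` command; the sample line here mirrors the `mlx_0_0` record shown in the log above.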
00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:54.719 192.168.100.9' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:54.719 192.168.100.9' 
00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:54.719 192.168.100.9' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.719 
05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3188114 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3188114 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3188114 ']' 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.719 05:25:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 [2024-11-27 05:25:49.944485] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:54.719 [2024-11-27 05:25:49.944582] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.719 [2024-11-27 05:25:50.105945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.719 [2024-11-27 05:25:50.211031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.719 [2024-11-27 05:25:50.211088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.719 [2024-11-27 05:25:50.211101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.719 [2024-11-27 05:25:50.211115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.719 [2024-11-27 05:25:50.211126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
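The `waitforlisten` step above (autotest_common.sh@839-844) blocks until `nvmf_tgt` is alive and listening on `/var/tmp/spdk.sock`. A hedged sketch of that polling pattern — the retry count and sleep interval are assumptions, and the real helper also issues probe RPCs before declaring the target ready:

```shell
# Wait until the target process is alive AND its RPC UNIX socket exists.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died
        [ -S "$rpc_addr" ] && return 0           # socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

The `'[' -z 3188114 ']'` check in the log corresponds to validating the pid argument before this loop starts.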
00:08:54.719 [2024-11-27 05:25:50.213809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.719 [2024-11-27 05:25:50.213875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.719 [2024-11-27 05:25:50.213996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.719 [2024-11-27 05:25:50.214022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.719 05:25:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 [2024-11-27 05:25:50.852842] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fae0495a940) succeed. 00:08:54.719 [2024-11-27 05:25:50.862575] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fae04914940) succeed. 
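The create_subsystem phase around host_management.sh@22-30 feeds a batch of RPCs through `rpc_cmd`, producing the `Malloc0` bdev and the listener on 192.168.100.8:4420 seen in the log. A hypothetical equivalent sequence: the NQNs, transport options, address, and port come from the log, while the malloc bdev size/block and serial number are assumptions; `$rpc` would normally be SPDK's `scripts/rpc.py` and is a parameter here so the sequence can be dry-run:

```shell
setup_nvmf_subsystem() {
    local rpc=$1 ip=$2
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$rpc" bdev_malloc_create -b Malloc0 64 512
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a "$ip" -s 4420
}

# Dry run: substitute `echo` for rpc.py to inspect the commands.
setup_nvmf_subsystem echo 192.168.100.8
```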
00:08:54.719 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 Malloc0 00:08:54.720 [2024-11-27 05:25:51.229424] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3188369 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@74 -- # waitforlisten 3188369 /var/tmp/bdevperf.sock 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3188369 ']' 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.720 { 00:08:54.720 "params": { 00:08:54.720 "name": "Nvme$subsystem", 00:08:54.720 "trtype": "$TEST_TRANSPORT", 00:08:54.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.720 "adrfam": "ipv4", 00:08:54.720 "trsvcid": "$NVMF_PORT", 00:08:54.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.720 "hdgst": ${hdgst:-false}, 00:08:54.720 "ddgst": ${ddgst:-false} 00:08:54.720 }, 00:08:54.720 "method": "bdev_nvme_attach_controller" 00:08:54.720 } 00:08:54.720 EOF 00:08:54.720 )") 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:54.720 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
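The `gen_nvmf_target_json` call above (nvmf/common.sh@560-586) assembles the bdevperf `--json` config from a heredoc, one `bdev_nvme_attach_controller` entry per subsystem number, joined with `IFS=,`. A compact standalone sketch that produces an equivalent document — the target address and port are hard-coded from this run, whereas the real helper substitutes `$NVMF_FIRST_TARGET_IP` and friends:

```shell
gen_nvmf_target_json() {
    local subsystem entries=()
    for subsystem in "${@:-0}"; do
        entries+=("$(printf '{"params":{"name":"Nvme%s","trtype":"rdma","traddr":"192.168.100.8","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem" "$subsystem")")
    done
    # Join the entries with commas, as the real helper does via `IFS=,`.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}"
}

gen_nvmf_target_json 0
```

bdevperf reads this document from `/dev/fd/63` via process substitution, which is why the log shows `--json /dev/fd/63`.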
00:08:54.980 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:54.980 05:25:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.980 "params": { 00:08:54.980 "name": "Nvme0", 00:08:54.980 "trtype": "rdma", 00:08:54.980 "traddr": "192.168.100.8", 00:08:54.980 "adrfam": "ipv4", 00:08:54.980 "trsvcid": "4420", 00:08:54.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:54.980 "hdgst": false, 00:08:54.980 "ddgst": false 00:08:54.980 }, 00:08:54.980 "method": "bdev_nvme_attach_controller" 00:08:54.980 }' 00:08:54.980 [2024-11-27 05:25:51.371879] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:54.980 [2024-11-27 05:25:51.371970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188369 ] 00:08:54.980 [2024-11-27 05:25:51.528586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.239 [2024-11-27 05:25:51.631271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.500 Running I/O for 10 seconds... 
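Once bdevperf prints "Running I/O for 10 seconds...", the harness polls read completions over the bdevperf RPC socket (the `waitforio` loop, host_management.sh@52-64). A sketch of that loop — the rpc command is injected as a parameter so the sketch can run against a stub instead of spdk's rpc.py; `jq` pulls `num_read_ops` exactly as the log shows:

```shell
waitforio() {
    local rpc_sock=$1 bdev=$2 rpc=$3
    local i read_io_count ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$("$rpc" -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # I/O is flowing; same threshold as host_management.sh@58
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

In this run the first sample already reports `read_io_count=435`, so the loop breaks immediately with `ret=0` and the test proceeds to the `nvmf_subsystem_remove_host` step.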
00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=435 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 435 -ge 100 ']' 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.761 05:25:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:56.704 528.00 IOPS, 33.00 MiB/s [2024-11-27T04:25:53.291Z] [2024-11-27 05:25:53.258505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ccf240 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cbf180 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000caf0c0 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c9f000 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258700] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c8ef40 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c7ee80 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c6edc0 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c5ed00 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c4ec40 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c3eb80 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c2eac0 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c1ea00 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c0e940 len:0x10000 key:0x181a00 00:08:56.704 [2024-11-27 05:25:53.258929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000feffc0 len:0x10000 key:0x181800 00:08:56.704 [2024-11-27 05:25:53.258954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fdff00 
len:0x10000 key:0x181800 00:08:56.704 [2024-11-27 05:25:53.258980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.258994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000beffc0 len:0x10000 key:0x181900 00:08:56.704 [2024-11-27 05:25:53.259006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000af3e000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000af1d000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aefc000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aedb000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aeba000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae99000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae78000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.704 [2024-11-27 05:25:53.259203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae57000 len:0x10000 key:0x182900 00:08:56.704 [2024-11-27 05:25:53.259215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae36000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae15000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000adf4000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000add3000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000adb2000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ad91000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ad70000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ad4f000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b14e000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b12d000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b10c000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70016 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b0eb000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b0ca000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b0a9000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b088000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b067000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b046000 len:0x10000 key:0x182900 
00:08:56.705 [2024-11-27 05:25:53.259667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b025000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b004000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000afe3000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000afc2000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000afa1000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000af80000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000af5f000 len:0x10000 key:0x182900 00:08:56.705 [2024-11-27 05:25:53.259853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff00 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.259878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcfe40 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.259904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfd80 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.259929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafcc0 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.259955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fc00 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.259980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.259993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fb40 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.260018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fa80 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.260044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6f9c0 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.260069] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5f900 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.260096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4f840 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.260122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3f780 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.260147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2f6c0 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.705 [2024-11-27 05:25:53.260172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f600 len:0x10000 key:0x181900 00:08:56.705 [2024-11-27 05:25:53.260184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.706 [2024-11-27 05:25:53.260197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f540 len:0x10000 key:0x181900
00:08:56.706 [2024-11-27 05:25:53.260209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:56.706 [2024-11-27 05:25:53.260222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff480 len:0x10000 key:0x181900
00:08:56.706 [2024-11-27 05:25:53.260234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:56.706 [2024-11-27 05:25:53.260247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef3c0 len:0x10000 key:0x181900
00:08:56.706 [2024-11-27 05:25:53.260260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:56.706 [2024-11-27 05:25:53.263581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:56.706 task offset: 71680 on job bdev=Nvme0n1 fails
00:08:56.706
00:08:56.706 Latency(us)
00:08:56.706 [2024-11-27T04:25:53.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:56.706 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:56.706 Job: Nvme0n1 ended in about 1.22 seconds with error
00:08:56.706 Verification LBA range: start 0x0 length 0x400
00:08:56.706 Nvme0n1 : 1.22 432.54 27.03 52.43 0.00 130740.18 2411.72 1020054.73
00:08:56.706 [2024-11-27T04:25:53.293Z] ===================================================================================================================
00:08:56.706 [2024-11-27T04:25:53.293Z] Total : 432.54 27.03 52.43 0.00 130740.18 2411.72 1020054.73
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3188369
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:56.706 {
00:08:56.706 "params": {
00:08:56.706 "name": "Nvme$subsystem",
00:08:56.706 "trtype": "$TEST_TRANSPORT",
00:08:56.706 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:56.706 "adrfam": "ipv4",
00:08:56.706 "trsvcid": "$NVMF_PORT",
00:08:56.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:56.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:56.706 "hdgst": ${hdgst:-false},
00:08:56.706 "ddgst": ${ddgst:-false}
00:08:56.706 },
00:08:56.706 "method": "bdev_nvme_attach_controller"
00:08:56.706 }
00:08:56.706 EOF
00:08:56.706 )")
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:56.706 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:56.966 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:56.966 05:25:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:56.966 "params": {
00:08:56.966 "name": "Nvme0",
00:08:56.966 "trtype": "rdma",
00:08:56.966 "traddr": "192.168.100.8",
00:08:56.966 "adrfam": "ipv4",
00:08:56.966 "trsvcid": "4420",
00:08:56.966 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:56.966 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:56.966 "hdgst": false,
00:08:56.966 "ddgst": false
00:08:56.966 },
00:08:56.966 "method": "bdev_nvme_attach_controller"
00:08:56.966 }'
00:08:56.966 [2024-11-27 05:25:53.356041] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:08:56.966 [2024-11-27 05:25:53.356130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188811 ]
00:08:56.966 [2024-11-27 05:25:53.509872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.298 [2024-11-27 05:25:53.613232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.619 Running I/O for 1 seconds...
00:08:58.659 2688.00 IOPS, 168.00 MiB/s
00:08:58.659 Latency(us)
00:08:58.659 [2024-11-27T04:25:55.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.659 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:58.659 Verification LBA range: start 0x0 length 0x400
00:08:58.659 Nvme0n1 : 1.01 2734.54 170.91 0.00 0.00 22921.52 1297.61 46976.20
00:08:58.659 [2024-11-27T04:25:55.246Z] ===================================================================================================================
00:08:58.659 [2024-11-27T04:25:55.246Z] Total : 2734.54 170.91 0.00 0.00 22921.52 1297.61 46976.20
00:08:59.598 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3188369 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp
']'
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:59.598 rmmod nvme_rdma
00:08:59.598 rmmod nvme_fabrics
00:08:59.598 05:25:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3188114 ']'
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3188114
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3188114 ']'
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3188114
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3188114
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3188114'
killing process with pid 3188114
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3188114
00:08:59.598 05:25:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3188114
00:09:01.504 [2024-11-27 05:25:57.832020] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:09:01.504
00:09:01.504 real 0m16.952s
00:09:01.504 user 0m35.733s
00:09:01.504 sys 0m8.177s
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:09:01.504 ************************************
00:09:01.504 END TEST nvmf_host_management
00:09:01.504 ************************************
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:01.504 05:25:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:01.504 ************************************
00:09:01.504 START TEST nvmf_lvol
00:09:01.504 ************************************
00:09:01.504 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:09:01.763 * Looking for test storage...
00:09:01.763 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:01.763 05:25:58
nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.763 --rc genhtml_branch_coverage=1 00:09:01.763 --rc genhtml_function_coverage=1 00:09:01.763 --rc genhtml_legend=1 00:09:01.763 --rc geninfo_all_blocks=1 00:09:01.763 --rc geninfo_unexecuted_blocks=1 00:09:01.763 00:09:01.763 ' 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.763 --rc genhtml_branch_coverage=1 00:09:01.763 --rc genhtml_function_coverage=1 00:09:01.763 --rc genhtml_legend=1 00:09:01.763 --rc geninfo_all_blocks=1 00:09:01.763 --rc geninfo_unexecuted_blocks=1 00:09:01.763 00:09:01.763 ' 00:09:01.763 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.763 --rc genhtml_branch_coverage=1 00:09:01.764 --rc genhtml_function_coverage=1 00:09:01.764 --rc genhtml_legend=1 00:09:01.764 --rc geninfo_all_blocks=1 00:09:01.764 --rc geninfo_unexecuted_blocks=1 00:09:01.764 00:09:01.764 ' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.764 --rc genhtml_branch_coverage=1 00:09:01.764 --rc genhtml_function_coverage=1 00:09:01.764 --rc genhtml_legend=1 00:09:01.764 --rc geninfo_all_blocks=1 00:09:01.764 --rc geninfo_unexecuted_blocks=1 00:09:01.764 00:09:01.764 ' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # 
shopt -s extglob 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.764 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.764 05:25:58 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.764 05:25:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 
00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:11.748 05:26:06 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:11.748 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:11.748 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == 
unbound ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:11.748 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:11.748 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.748 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:11.749 05:26:06 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:11.749 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.749 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:11.749 altname enp217s0f0np0 00:09:11.749 altname ens818f0np0 00:09:11.749 inet 192.168.100.8/24 scope global mlx_0_0 00:09:11.749 valid_lft forever preferred_lft forever 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:11.749 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.749 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:11.749 altname enp217s0f1np1 00:09:11.749 altname ens818f1np1 00:09:11.749 inet 192.168.100.9/24 scope global mlx_0_1 00:09:11.749 valid_lft forever preferred_lft forever 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:11.749 05:26:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:11.749 05:26:06 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:11.749 192.168.100.9' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:11.749 192.168.100.9' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:11.749 192.168.100.9' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:11.749 05:26:07 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3193804 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3193804 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3193804 ']' 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.749 [2024-11-27 05:26:07.183599] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:11.749 [2024-11-27 05:26:07.183709] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.749 [2024-11-27 05:26:07.338292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.749 [2024-11-27 05:26:07.434984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.749 [2024-11-27 05:26:07.435036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.749 [2024-11-27 05:26:07.435050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.749 [2024-11-27 05:26:07.435064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.749 [2024-11-27 05:26:07.435073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:11.749 [2024-11-27 05:26:07.437490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.749 [2024-11-27 05:26:07.437561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.749 [2024-11-27 05:26:07.437565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:11.749 05:26:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.749 05:26:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.750 05:26:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:11.750 [2024-11-27 05:26:08.222541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f24eb52f940) succeed. 00:09:11.750 [2024-11-27 05:26:08.232058] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f24eb3bd940) succeed. 
00:09:12.009 05:26:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.269 05:26:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:12.269 05:26:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.527 05:26:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:12.528 05:26:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:12.785 05:26:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:12.785 05:26:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3277dcd5-edfb-4e82-a28a-d6214609fe12 00:09:12.785 05:26:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3277dcd5-edfb-4e82-a28a-d6214609fe12 lvol 20 00:09:13.043 05:26:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=04971de6-3b76-4d06-92b3-901a9c6bfb50 00:09:13.043 05:26:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:13.301 05:26:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04971de6-3b76-4d06-92b3-901a9c6bfb50 00:09:13.560 05:26:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:13.560 [2024-11-27 05:26:10.140351] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:13.819 05:26:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:13.819 05:26:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3194376 00:09:13.819 05:26:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:13.819 05:26:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:15.198 05:26:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 04971de6-3b76-4d06-92b3-901a9c6bfb50 MY_SNAPSHOT 00:09:15.198 05:26:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b1bbce0f-4126-43fb-9ea1-e454c6a966bc 00:09:15.198 05:26:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 04971de6-3b76-4d06-92b3-901a9c6bfb50 30 00:09:15.457 05:26:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b1bbce0f-4126-43fb-9ea1-e454c6a966bc MY_CLONE 00:09:15.457 05:26:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b548f349-df9d-4fad-8e38-dab9acc8a9f4 00:09:15.457 05:26:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b548f349-df9d-4fad-8e38-dab9acc8a9f4 00:09:16.026 05:26:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3194376 00:09:26.006 Initializing NVMe Controllers 00:09:26.006 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:26.006 Controller IO queue size 128, less than required. 00:09:26.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:26.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:26.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:26.006 Initialization complete. Launching workers. 00:09:26.006 ======================================================== 00:09:26.006 Latency(us) 00:09:26.006 Device Information : IOPS MiB/s Average min max 00:09:26.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15239.30 59.53 8400.16 3466.45 119664.08 00:09:26.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15081.30 58.91 8487.59 150.96 104819.36 00:09:26.006 ======================================================== 00:09:26.006 Total : 30320.60 118.44 8443.65 150.96 119664.08 00:09:26.006 00:09:26.006 05:26:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 04971de6-3b76-4d06-92b3-901a9c6bfb50 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
3277dcd5-edfb-4e82-a28a-d6214609fe12 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:26.007 rmmod nvme_rdma 00:09:26.007 rmmod nvme_fabrics 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3193804 ']' 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3193804 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3193804 ']' 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3193804 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:26.007 05:26:22 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193804 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193804' 00:09:26.007 killing process with pid 3193804 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3193804 00:09:26.007 05:26:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3193804 00:09:27.909 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.909 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:27.909 00:09:27.909 real 0m26.451s 00:09:27.909 user 1m17.400s 00:09:27.909 sys 0m8.276s 00:09:27.909 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.909 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 ************************************ 00:09:27.909 END TEST nvmf_lvol 00:09:27.909 ************************************ 00:09:28.169 05:26:24 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.170 ************************************ 00:09:28.170 START TEST nvmf_lvs_grow 00:09:28.170 ************************************ 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:28.170 * Looking for test storage... 00:09:28.170 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.170 05:26:24 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@368 -- # return 0 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.170 --rc genhtml_branch_coverage=1 00:09:28.170 --rc genhtml_function_coverage=1 00:09:28.170 --rc genhtml_legend=1 00:09:28.170 --rc geninfo_all_blocks=1 00:09:28.170 --rc geninfo_unexecuted_blocks=1 00:09:28.170 00:09:28.170 ' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.170 --rc genhtml_branch_coverage=1 00:09:28.170 --rc genhtml_function_coverage=1 00:09:28.170 --rc genhtml_legend=1 00:09:28.170 --rc geninfo_all_blocks=1 00:09:28.170 --rc geninfo_unexecuted_blocks=1 00:09:28.170 00:09:28.170 ' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.170 --rc genhtml_branch_coverage=1 00:09:28.170 --rc genhtml_function_coverage=1 00:09:28.170 --rc genhtml_legend=1 00:09:28.170 --rc geninfo_all_blocks=1 00:09:28.170 --rc geninfo_unexecuted_blocks=1 00:09:28.170 00:09:28.170 ' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.170 --rc genhtml_branch_coverage=1 00:09:28.170 --rc genhtml_function_coverage=1 00:09:28.170 --rc genhtml_legend=1 00:09:28.170 --rc geninfo_all_blocks=1 00:09:28.170 --rc geninfo_unexecuted_blocks=1 00:09:28.170 00:09:28.170 ' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.170 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.171 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:28.171 05:26:24 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.171 05:26:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:38.155 05:26:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:38.155 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:38.155 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.155 05:26:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:38.155 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:38.155 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # 
[[ yes == yes ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:38.155 05:26:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 
00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:38.155 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.155 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:38.155 altname enp217s0f0np0 00:09:38.155 altname ens818f0np0 00:09:38.155 inet 192.168.100.8/24 scope global mlx_0_0 00:09:38.155 valid_lft forever preferred_lft forever 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:38.155 05:26:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:38.155 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.155 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:38.155 altname enp217s0f1np1 00:09:38.155 altname ens818f1np1 00:09:38.155 inet 192.168.100.9/24 scope global mlx_0_1 00:09:38.155 valid_lft forever preferred_lft forever 00:09:38.155 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:38.156 192.168.100.9' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:38.156 192.168.100.9' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:38.156 192.168.100.9' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe 
nvme-rdma 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3200961 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3200961 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3200961 ']' 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.156 05:26:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.156 [2024-11-27 05:26:33.813504] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:38.156 [2024-11-27 05:26:33.813602] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.156 [2024-11-27 05:26:33.964483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.156 [2024-11-27 05:26:34.060094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.156 [2024-11-27 05:26:34.060142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.156 [2024-11-27 05:26:34.060154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.156 [2024-11-27 05:26:34.060184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.156 [2024-11-27 05:26:34.060194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:38.156 [2024-11-27 05:26:34.061543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.156 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.156 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:38.156 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.156 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.156 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.156 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.156 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:38.415 [2024-11-27 05:26:34.851352] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f1bd5fbd940) succeed. 00:09:38.415 [2024-11-27 05:26:34.860372] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f1bd5f79940) succeed. 
00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.415 ************************************ 00:09:38.415 START TEST lvs_grow_clean 00:09:38.415 ************************************ 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.415 05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.672 
05:26:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:38.672 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:38.672 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:38.930 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:38.930 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:38.930 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:39.189 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:39.189 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:39.189 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 lvol 150 00:09:39.189 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7 00:09:39.189 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:39.447 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:39.447 [2024-11-27 05:26:35.946723] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:39.447 [2024-11-27 05:26:35.946817] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:39.447 true 00:09:39.447 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:39.447 05:26:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:39.706 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:39.706 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:39.965 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7 00:09:39.965 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:40.223 [2024-11-27 05:26:36.693337] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:40.223 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3201538 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3201538 /var/tmp/bdevperf.sock 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3201538 ']' 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:40.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.482 05:26:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:40.482 [2024-11-27 05:26:36.963421] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:40.482 [2024-11-27 05:26:36.963510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201538 ] 00:09:40.740 [2024-11-27 05:26:37.116860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.740 [2024-11-27 05:26:37.216356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.307 05:26:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.307 05:26:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:41.307 05:26:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:41.566 Nvme0n1 00:09:41.566 05:26:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:41.825 [ 00:09:41.825 { 00:09:41.825 "name": "Nvme0n1", 00:09:41.825 "aliases": [ 00:09:41.825 "34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7" 00:09:41.825 ], 00:09:41.825 "product_name": "NVMe disk", 00:09:41.825 "block_size": 4096, 00:09:41.825 "num_blocks": 38912, 00:09:41.825 "uuid": 
"34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7", 00:09:41.825 "numa_id": 1, 00:09:41.825 "assigned_rate_limits": { 00:09:41.825 "rw_ios_per_sec": 0, 00:09:41.825 "rw_mbytes_per_sec": 0, 00:09:41.825 "r_mbytes_per_sec": 0, 00:09:41.825 "w_mbytes_per_sec": 0 00:09:41.825 }, 00:09:41.825 "claimed": false, 00:09:41.825 "zoned": false, 00:09:41.825 "supported_io_types": { 00:09:41.825 "read": true, 00:09:41.825 "write": true, 00:09:41.825 "unmap": true, 00:09:41.825 "flush": true, 00:09:41.825 "reset": true, 00:09:41.825 "nvme_admin": true, 00:09:41.825 "nvme_io": true, 00:09:41.825 "nvme_io_md": false, 00:09:41.825 "write_zeroes": true, 00:09:41.825 "zcopy": false, 00:09:41.825 "get_zone_info": false, 00:09:41.825 "zone_management": false, 00:09:41.825 "zone_append": false, 00:09:41.825 "compare": true, 00:09:41.825 "compare_and_write": true, 00:09:41.825 "abort": true, 00:09:41.825 "seek_hole": false, 00:09:41.825 "seek_data": false, 00:09:41.825 "copy": true, 00:09:41.825 "nvme_iov_md": false 00:09:41.825 }, 00:09:41.825 "memory_domains": [ 00:09:41.825 { 00:09:41.825 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:41.825 "dma_device_type": 0 00:09:41.825 } 00:09:41.825 ], 00:09:41.825 "driver_specific": { 00:09:41.825 "nvme": [ 00:09:41.825 { 00:09:41.825 "trid": { 00:09:41.825 "trtype": "RDMA", 00:09:41.825 "adrfam": "IPv4", 00:09:41.825 "traddr": "192.168.100.8", 00:09:41.825 "trsvcid": "4420", 00:09:41.825 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:41.825 }, 00:09:41.825 "ctrlr_data": { 00:09:41.825 "cntlid": 1, 00:09:41.825 "vendor_id": "0x8086", 00:09:41.825 "model_number": "SPDK bdev Controller", 00:09:41.825 "serial_number": "SPDK0", 00:09:41.825 "firmware_revision": "25.01", 00:09:41.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.825 "oacs": { 00:09:41.825 "security": 0, 00:09:41.825 "format": 0, 00:09:41.825 "firmware": 0, 00:09:41.825 "ns_manage": 0 00:09:41.825 }, 00:09:41.825 "multi_ctrlr": true, 00:09:41.825 "ana_reporting": false 00:09:41.825 }, 
00:09:41.825 "vs": { 00:09:41.825 "nvme_version": "1.3" 00:09:41.825 }, 00:09:41.825 "ns_data": { 00:09:41.825 "id": 1, 00:09:41.825 "can_share": true 00:09:41.825 } 00:09:41.825 } 00:09:41.825 ], 00:09:41.825 "mp_policy": "active_passive" 00:09:41.825 } 00:09:41.825 } 00:09:41.825 ] 00:09:41.825 05:26:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3201807 00:09:41.825 05:26:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:41.825 05:26:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:41.825 Running I/O for 10 seconds... 00:09:42.761 Latency(us) 00:09:42.761 [2024-11-27T04:26:39.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.761 Nvme0n1 : 1.00 30273.00 118.25 0.00 0.00 0.00 0.00 0.00 00:09:42.761 [2024-11-27T04:26:39.348Z] =================================================================================================================== 00:09:42.761 [2024-11-27T04:26:39.348Z] Total : 30273.00 118.25 0.00 0.00 0.00 0.00 0.00 00:09:42.761 00:09:43.700 05:26:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:43.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.959 Nvme0n1 : 2.00 30400.50 118.75 0.00 0.00 0.00 0.00 0.00 00:09:43.959 [2024-11-27T04:26:40.546Z] =================================================================================================================== 00:09:43.959 [2024-11-27T04:26:40.546Z] Total : 30400.50 118.75 0.00 0.00 
0.00 0.00 0.00 00:09:43.959 00:09:43.959 true 00:09:43.959 05:26:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:43.959 05:26:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:44.219 05:26:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:44.219 05:26:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:44.219 05:26:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3201807 00:09:44.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.787 Nvme0n1 : 3.00 30465.33 119.01 0.00 0.00 0.00 0.00 0.00 00:09:44.787 [2024-11-27T04:26:41.374Z] =================================================================================================================== 00:09:44.787 [2024-11-27T04:26:41.374Z] Total : 30465.33 119.01 0.00 0.00 0.00 0.00 0.00 00:09:44.787 00:09:46.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.167 Nvme0n1 : 4.00 30632.00 119.66 0.00 0.00 0.00 0.00 0.00 00:09:46.167 [2024-11-27T04:26:42.754Z] =================================================================================================================== 00:09:46.167 [2024-11-27T04:26:42.754Z] Total : 30632.00 119.66 0.00 0.00 0.00 0.00 0.00 00:09:46.167 00:09:47.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.104 Nvme0n1 : 5.00 30746.00 120.10 0.00 0.00 0.00 0.00 0.00 00:09:47.104 [2024-11-27T04:26:43.691Z] =================================================================================================================== 00:09:47.104 
[2024-11-27T04:26:43.691Z] Total : 30746.00 120.10 0.00 0.00 0.00 0.00 0.00 00:09:47.104 00:09:48.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.043 Nvme0n1 : 6.00 30827.50 120.42 0.00 0.00 0.00 0.00 0.00 00:09:48.043 [2024-11-27T04:26:44.630Z] =================================================================================================================== 00:09:48.043 [2024-11-27T04:26:44.630Z] Total : 30827.50 120.42 0.00 0.00 0.00 0.00 0.00 00:09:48.043 00:09:48.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.979 Nvme0n1 : 7.00 30893.43 120.68 0.00 0.00 0.00 0.00 0.00 00:09:48.979 [2024-11-27T04:26:45.566Z] =================================================================================================================== 00:09:48.979 [2024-11-27T04:26:45.566Z] Total : 30893.43 120.68 0.00 0.00 0.00 0.00 0.00 00:09:48.979 00:09:49.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.915 Nvme0n1 : 8.00 30935.75 120.84 0.00 0.00 0.00 0.00 0.00 00:09:49.915 [2024-11-27T04:26:46.502Z] =================================================================================================================== 00:09:49.915 [2024-11-27T04:26:46.502Z] Total : 30935.75 120.84 0.00 0.00 0.00 0.00 0.00 00:09:49.915 00:09:50.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.852 Nvme0n1 : 9.00 30972.11 120.98 0.00 0.00 0.00 0.00 0.00 00:09:50.852 [2024-11-27T04:26:47.440Z] =================================================================================================================== 00:09:50.853 [2024-11-27T04:26:47.440Z] Total : 30972.11 120.98 0.00 0.00 0.00 0.00 0.00 00:09:50.853 00:09:51.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.790 Nvme0n1 : 10.00 31005.30 121.11 0.00 0.00 0.00 0.00 0.00 00:09:51.790 [2024-11-27T04:26:48.377Z] 
=================================================================================================================== 00:09:51.790 [2024-11-27T04:26:48.377Z] Total : 31005.30 121.11 0.00 0.00 0.00 0.00 0.00 00:09:51.790 00:09:51.790 00:09:51.790 Latency(us) 00:09:51.790 [2024-11-27T04:26:48.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.790 Nvme0n1 : 10.00 31005.98 121.12 0.00 0.00 4124.94 2988.44 10852.76 00:09:51.790 [2024-11-27T04:26:48.377Z] =================================================================================================================== 00:09:51.790 [2024-11-27T04:26:48.377Z] Total : 31005.98 121.12 0.00 0.00 4124.94 2988.44 10852.76 00:09:51.790 { 00:09:51.790 "results": [ 00:09:51.790 { 00:09:51.790 "job": "Nvme0n1", 00:09:51.790 "core_mask": "0x2", 00:09:51.790 "workload": "randwrite", 00:09:51.790 "status": "finished", 00:09:51.790 "queue_depth": 128, 00:09:51.790 "io_size": 4096, 00:09:51.790 "runtime": 10.00362, 00:09:51.790 "iops": 31005.975836747097, 00:09:51.790 "mibps": 121.11709311229335, 00:09:51.790 "io_failed": 0, 00:09:51.790 "io_timeout": 0, 00:09:51.790 "avg_latency_us": 4124.942672055505, 00:09:51.790 "min_latency_us": 2988.4416, 00:09:51.790 "max_latency_us": 10852.7616 00:09:51.790 } 00:09:51.790 ], 00:09:51.790 "core_count": 1 00:09:51.790 } 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3201538 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3201538 ']' 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3201538 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:52.050 05:26:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3201538 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3201538' 00:09:52.050 killing process with pid 3201538 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3201538 00:09:52.050 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.050 00:09:52.050 Latency(us) 00:09:52.050 [2024-11-27T04:26:48.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.050 [2024-11-27T04:26:48.637Z] =================================================================================================================== 00:09:52.050 [2024-11-27T04:26:48.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.050 05:26:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3201538 00:09:52.987 05:26:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:52.988 05:26:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:53.246 05:26:49 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:53.246 05:26:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:53.504 05:26:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:53.504 05:26:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:53.504 05:26:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:53.763 [2024-11-27 05:26:50.098985] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:53.763 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:53.763 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:53.764 request: 00:09:53.764 { 00:09:53.764 "uuid": "2665c9e5-ba43-427e-8cbb-13ade44e4694", 00:09:53.764 "method": "bdev_lvol_get_lvstores", 00:09:53.764 "req_id": 1 00:09:53.764 } 00:09:53.764 Got JSON-RPC error response 00:09:53.764 response: 00:09:53.764 { 00:09:53.764 "code": -19, 00:09:53.764 "message": "No such device" 00:09:53.764 } 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:53.764 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:54.023 aio_bdev 00:09:54.023 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7 00:09:54.023 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7 00:09:54.023 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.023 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:54.023 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.023 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.023 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:54.282 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7 -t 2000 00:09:54.540 [ 00:09:54.540 { 00:09:54.540 "name": "34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7", 00:09:54.540 "aliases": [ 00:09:54.540 "lvs/lvol" 00:09:54.540 ], 00:09:54.540 "product_name": "Logical Volume", 00:09:54.540 "block_size": 4096, 00:09:54.540 "num_blocks": 38912, 00:09:54.540 "uuid": "34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7", 00:09:54.540 
"assigned_rate_limits": { 00:09:54.540 "rw_ios_per_sec": 0, 00:09:54.540 "rw_mbytes_per_sec": 0, 00:09:54.540 "r_mbytes_per_sec": 0, 00:09:54.540 "w_mbytes_per_sec": 0 00:09:54.540 }, 00:09:54.540 "claimed": false, 00:09:54.540 "zoned": false, 00:09:54.540 "supported_io_types": { 00:09:54.540 "read": true, 00:09:54.540 "write": true, 00:09:54.540 "unmap": true, 00:09:54.540 "flush": false, 00:09:54.540 "reset": true, 00:09:54.540 "nvme_admin": false, 00:09:54.540 "nvme_io": false, 00:09:54.540 "nvme_io_md": false, 00:09:54.540 "write_zeroes": true, 00:09:54.540 "zcopy": false, 00:09:54.540 "get_zone_info": false, 00:09:54.540 "zone_management": false, 00:09:54.540 "zone_append": false, 00:09:54.540 "compare": false, 00:09:54.540 "compare_and_write": false, 00:09:54.540 "abort": false, 00:09:54.540 "seek_hole": true, 00:09:54.540 "seek_data": true, 00:09:54.540 "copy": false, 00:09:54.540 "nvme_iov_md": false 00:09:54.540 }, 00:09:54.540 "driver_specific": { 00:09:54.540 "lvol": { 00:09:54.541 "lvol_store_uuid": "2665c9e5-ba43-427e-8cbb-13ade44e4694", 00:09:54.541 "base_bdev": "aio_bdev", 00:09:54.541 "thin_provision": false, 00:09:54.541 "num_allocated_clusters": 38, 00:09:54.541 "snapshot": false, 00:09:54.541 "clone": false, 00:09:54.541 "esnap_clone": false 00:09:54.541 } 00:09:54.541 } 00:09:54.541 } 00:09:54.541 ] 00:09:54.541 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:54.541 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:54.541 05:26:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:54.541 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:54.541 
05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:54.541 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:54.800 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:54.800 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34e3cd33-4fd7-4a77-ab1a-bc0cb1af0ab7 00:09:55.058 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2665c9e5-ba43-427e-8cbb-13ade44e4694 00:09:55.316 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:55.316 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:55.316 00:09:55.316 real 0m16.912s 00:09:55.316 user 0m16.728s 00:09:55.316 sys 0m1.360s 00:09:55.316 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.316 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:55.575 ************************************ 00:09:55.575 END TEST lvs_grow_clean 00:09:55.575 ************************************ 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 
00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:55.575 ************************************ 00:09:55.575 START TEST lvs_grow_dirty 00:09:55.575 ************************************ 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:55.575 05:26:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:55.575 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:55.834 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:55.835 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:55.835 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:09:55.835 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:09:55.835 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:56.094 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:56.094 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:56.094 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf lvol 150 00:09:56.354 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a0287ba-2ed4-467a-85fa-17a127a64c5e 00:09:56.354 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:56.354 05:26:52 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:56.354 [2024-11-27 05:26:52.935295] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:56.354 [2024-11-27 05:26:52.935387] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:56.354 true 00:09:56.612 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:09:56.612 05:26:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:56.612 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:56.612 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:56.871 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a0287ba-2ed4-467a-85fa-17a127a64c5e 00:09:57.131 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:57.131 [2024-11-27 05:26:53.677840] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:57.131 
05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3204538 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3204538 /var/tmp/bdevperf.sock 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3204538 ']' 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:57.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.398 05:26:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:57.398 [2024-11-27 05:26:53.936396] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:57.398 [2024-11-27 05:26:53.936489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204538 ] 00:09:57.657 [2024-11-27 05:26:54.090129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.657 [2024-11-27 05:26:54.188645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.222 05:26:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.223 05:26:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:58.223 05:26:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:58.481 Nvme0n1 00:09:58.481 05:26:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:58.740 [ 00:09:58.740 { 00:09:58.740 "name": "Nvme0n1", 00:09:58.740 "aliases": [ 00:09:58.740 "1a0287ba-2ed4-467a-85fa-17a127a64c5e" 00:09:58.740 ], 00:09:58.740 "product_name": "NVMe disk", 00:09:58.740 "block_size": 4096, 00:09:58.740 "num_blocks": 38912, 00:09:58.740 "uuid": 
"1a0287ba-2ed4-467a-85fa-17a127a64c5e", 00:09:58.740 "numa_id": 1, 00:09:58.740 "assigned_rate_limits": { 00:09:58.740 "rw_ios_per_sec": 0, 00:09:58.740 "rw_mbytes_per_sec": 0, 00:09:58.740 "r_mbytes_per_sec": 0, 00:09:58.740 "w_mbytes_per_sec": 0 00:09:58.740 }, 00:09:58.740 "claimed": false, 00:09:58.740 "zoned": false, 00:09:58.740 "supported_io_types": { 00:09:58.740 "read": true, 00:09:58.740 "write": true, 00:09:58.740 "unmap": true, 00:09:58.740 "flush": true, 00:09:58.740 "reset": true, 00:09:58.740 "nvme_admin": true, 00:09:58.740 "nvme_io": true, 00:09:58.740 "nvme_io_md": false, 00:09:58.740 "write_zeroes": true, 00:09:58.740 "zcopy": false, 00:09:58.740 "get_zone_info": false, 00:09:58.740 "zone_management": false, 00:09:58.740 "zone_append": false, 00:09:58.740 "compare": true, 00:09:58.740 "compare_and_write": true, 00:09:58.740 "abort": true, 00:09:58.740 "seek_hole": false, 00:09:58.740 "seek_data": false, 00:09:58.740 "copy": true, 00:09:58.740 "nvme_iov_md": false 00:09:58.740 }, 00:09:58.740 "memory_domains": [ 00:09:58.740 { 00:09:58.740 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:58.740 "dma_device_type": 0 00:09:58.740 } 00:09:58.740 ], 00:09:58.740 "driver_specific": { 00:09:58.740 "nvme": [ 00:09:58.740 { 00:09:58.740 "trid": { 00:09:58.740 "trtype": "RDMA", 00:09:58.740 "adrfam": "IPv4", 00:09:58.740 "traddr": "192.168.100.8", 00:09:58.740 "trsvcid": "4420", 00:09:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:58.740 }, 00:09:58.740 "ctrlr_data": { 00:09:58.740 "cntlid": 1, 00:09:58.740 "vendor_id": "0x8086", 00:09:58.740 "model_number": "SPDK bdev Controller", 00:09:58.740 "serial_number": "SPDK0", 00:09:58.740 "firmware_revision": "25.01", 00:09:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:58.740 "oacs": { 00:09:58.740 "security": 0, 00:09:58.740 "format": 0, 00:09:58.740 "firmware": 0, 00:09:58.740 "ns_manage": 0 00:09:58.740 }, 00:09:58.740 "multi_ctrlr": true, 00:09:58.740 "ana_reporting": false 00:09:58.740 }, 
00:09:58.740 "vs": { 00:09:58.740 "nvme_version": "1.3" 00:09:58.740 }, 00:09:58.740 "ns_data": { 00:09:58.740 "id": 1, 00:09:58.740 "can_share": true 00:09:58.740 } 00:09:58.740 } 00:09:58.740 ], 00:09:58.740 "mp_policy": "active_passive" 00:09:58.740 } 00:09:58.740 } 00:09:58.740 ] 00:09:58.740 05:26:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3204808 00:09:58.740 05:26:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:58.740 05:26:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:58.740 Running I/O for 10 seconds... 00:10:00.115 Latency(us) 00:10:00.115 [2024-11-27T04:26:56.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.115 Nvme0n1 : 1.00 30049.00 117.38 0.00 0.00 0.00 0.00 0.00 00:10:00.115 [2024-11-27T04:26:56.702Z] =================================================================================================================== 00:10:00.115 [2024-11-27T04:26:56.702Z] Total : 30049.00 117.38 0.00 0.00 0.00 0.00 0.00 00:10:00.115 00:10:00.681 05:26:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:00.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.940 Nvme0n1 : 2.00 30417.00 118.82 0.00 0.00 0.00 0.00 0.00 00:10:00.940 [2024-11-27T04:26:57.527Z] =================================================================================================================== 00:10:00.940 [2024-11-27T04:26:57.527Z] Total : 30417.00 118.82 0.00 0.00 
0.00 0.00 0.00 00:10:00.940 00:10:00.940 true 00:10:00.940 05:26:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:00.940 05:26:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:01.199 05:26:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:01.199 05:26:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:01.199 05:26:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3204808 00:10:01.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.766 Nvme0n1 : 3.00 30486.33 119.09 0.00 0.00 0.00 0.00 0.00 00:10:01.766 [2024-11-27T04:26:58.353Z] =================================================================================================================== 00:10:01.766 [2024-11-27T04:26:58.353Z] Total : 30486.33 119.09 0.00 0.00 0.00 0.00 0.00 00:10:01.766 00:10:03.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.144 Nvme0n1 : 4.00 30616.50 119.60 0.00 0.00 0.00 0.00 0.00 00:10:03.144 [2024-11-27T04:26:59.731Z] =================================================================================================================== 00:10:03.144 [2024-11-27T04:26:59.731Z] Total : 30616.50 119.60 0.00 0.00 0.00 0.00 0.00 00:10:03.144 00:10:04.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:04.079 Nvme0n1 : 5.00 30636.40 119.67 0.00 0.00 0.00 0.00 0.00 00:10:04.079 [2024-11-27T04:27:00.666Z] =================================================================================================================== 00:10:04.079 
[2024-11-27T04:27:00.666Z] Total : 30636.40 119.67 0.00 0.00 0.00 0.00 0.00 00:10:04.079 00:10:05.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.015 Nvme0n1 : 6.00 30581.50 119.46 0.00 0.00 0.00 0.00 0.00 00:10:05.015 [2024-11-27T04:27:01.602Z] =================================================================================================================== 00:10:05.015 [2024-11-27T04:27:01.602Z] Total : 30581.50 119.46 0.00 0.00 0.00 0.00 0.00 00:10:05.015 00:10:05.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.949 Nvme0n1 : 7.00 30614.43 119.59 0.00 0.00 0.00 0.00 0.00 00:10:05.949 [2024-11-27T04:27:02.536Z] =================================================================================================================== 00:10:05.949 [2024-11-27T04:27:02.536Z] Total : 30614.43 119.59 0.00 0.00 0.00 0.00 0.00 00:10:05.949 00:10:07.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.032 Nvme0n1 : 8.00 30624.38 119.63 0.00 0.00 0.00 0.00 0.00 00:10:07.032 [2024-11-27T04:27:03.619Z] =================================================================================================================== 00:10:07.032 [2024-11-27T04:27:03.619Z] Total : 30624.38 119.63 0.00 0.00 0.00 0.00 0.00 00:10:07.032 00:10:08.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.066 Nvme0n1 : 9.00 30620.00 119.61 0.00 0.00 0.00 0.00 0.00 00:10:08.066 [2024-11-27T04:27:04.653Z] =================================================================================================================== 00:10:08.066 [2024-11-27T04:27:04.653Z] Total : 30620.00 119.61 0.00 0.00 0.00 0.00 0.00 00:10:08.066 00:10:09.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.004 Nvme0n1 : 10.00 30655.60 119.75 0.00 0.00 0.00 0.00 0.00 00:10:09.004 [2024-11-27T04:27:05.591Z] 
=================================================================================================================== 00:10:09.004 [2024-11-27T04:27:05.591Z] Total : 30655.60 119.75 0.00 0.00 0.00 0.00 0.00 00:10:09.004 00:10:09.004 00:10:09.004 Latency(us) 00:10:09.004 [2024-11-27T04:27:05.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.004 Nvme0n1 : 10.00 30655.05 119.75 0.00 0.00 4172.06 3040.87 13736.35 00:10:09.004 [2024-11-27T04:27:05.591Z] =================================================================================================================== 00:10:09.004 [2024-11-27T04:27:05.591Z] Total : 30655.05 119.75 0.00 0.00 4172.06 3040.87 13736.35 00:10:09.004 { 00:10:09.004 "results": [ 00:10:09.004 { 00:10:09.004 "job": "Nvme0n1", 00:10:09.004 "core_mask": "0x2", 00:10:09.004 "workload": "randwrite", 00:10:09.004 "status": "finished", 00:10:09.004 "queue_depth": 128, 00:10:09.004 "io_size": 4096, 00:10:09.004 "runtime": 10.003507, 00:10:09.004 "iops": 30655.04927421953, 00:10:09.004 "mibps": 119.74628622742004, 00:10:09.004 "io_failed": 0, 00:10:09.004 "io_timeout": 0, 00:10:09.004 "avg_latency_us": 4172.058030926961, 00:10:09.004 "min_latency_us": 3040.8704, 00:10:09.004 "max_latency_us": 13736.3456 00:10:09.004 } 00:10:09.004 ], 00:10:09.004 "core_count": 1 00:10:09.004 } 00:10:09.004 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3204538 00:10:09.004 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3204538 ']' 00:10:09.004 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3204538 00:10:09.005 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:09.005 05:27:05 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.005 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3204538 00:10:09.005 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:09.005 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:09.005 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3204538' 00:10:09.005 killing process with pid 3204538 00:10:09.005 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3204538 00:10:09.005 Received shutdown signal, test time was about 10.000000 seconds 00:10:09.005 00:10:09.005 Latency(us) 00:10:09.005 [2024-11-27T04:27:05.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.005 [2024-11-27T04:27:05.592Z] =================================================================================================================== 00:10:09.005 [2024-11-27T04:27:05.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:09.005 05:27:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3204538 00:10:09.943 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:09.943 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.202 05:27:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:10.202 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3200961 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3200961 00:10:10.462 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3200961 Killed "${NVMF_APP[@]}" "$@" 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3207355 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3207355 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3207355 ']' 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.462 05:27:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 [2024-11-27 05:27:07.051129] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:10.721 [2024-11-27 05:27:07.051225] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.721 [2024-11-27 05:27:07.210218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.721 [2024-11-27 05:27:07.304326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.721 [2024-11-27 05:27:07.304389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:10.721 [2024-11-27 05:27:07.304402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.721 [2024-11-27 05:27:07.304431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.721 [2024-11-27 05:27:07.304441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.721 [2024-11-27 05:27:07.305785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.292 05:27:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.292 05:27:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:11.292 05:27:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.292 05:27:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.292 05:27:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:11.551 05:27:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.551 05:27:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:11.551 [2024-11-27 05:27:08.136184] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:11.551 [2024-11-27 05:27:08.136352] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:11.551 [2024-11-27 05:27:08.136401] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:11.810 05:27:08 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1a0287ba-2ed4-467a-85fa-17a127a64c5e 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1a0287ba-2ed4-467a-85fa-17a127a64c5e 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:11.810 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a0287ba-2ed4-467a-85fa-17a127a64c5e -t 2000 00:10:12.070 [ 00:10:12.070 { 00:10:12.070 "name": "1a0287ba-2ed4-467a-85fa-17a127a64c5e", 00:10:12.070 "aliases": [ 00:10:12.070 "lvs/lvol" 00:10:12.070 ], 00:10:12.070 "product_name": "Logical Volume", 00:10:12.070 "block_size": 4096, 00:10:12.070 "num_blocks": 38912, 00:10:12.070 "uuid": "1a0287ba-2ed4-467a-85fa-17a127a64c5e", 00:10:12.070 "assigned_rate_limits": { 00:10:12.070 "rw_ios_per_sec": 0, 00:10:12.070 "rw_mbytes_per_sec": 0, 00:10:12.070 "r_mbytes_per_sec": 0, 00:10:12.070 "w_mbytes_per_sec": 0 00:10:12.070 }, 00:10:12.070 "claimed": false, 00:10:12.070 "zoned": false, 
00:10:12.070 "supported_io_types": { 00:10:12.070 "read": true, 00:10:12.070 "write": true, 00:10:12.070 "unmap": true, 00:10:12.070 "flush": false, 00:10:12.070 "reset": true, 00:10:12.070 "nvme_admin": false, 00:10:12.070 "nvme_io": false, 00:10:12.070 "nvme_io_md": false, 00:10:12.070 "write_zeroes": true, 00:10:12.070 "zcopy": false, 00:10:12.070 "get_zone_info": false, 00:10:12.070 "zone_management": false, 00:10:12.070 "zone_append": false, 00:10:12.070 "compare": false, 00:10:12.070 "compare_and_write": false, 00:10:12.070 "abort": false, 00:10:12.070 "seek_hole": true, 00:10:12.070 "seek_data": true, 00:10:12.070 "copy": false, 00:10:12.070 "nvme_iov_md": false 00:10:12.070 }, 00:10:12.070 "driver_specific": { 00:10:12.070 "lvol": { 00:10:12.070 "lvol_store_uuid": "6abc3c8a-c280-4e11-9831-3c8b36e73aaf", 00:10:12.070 "base_bdev": "aio_bdev", 00:10:12.070 "thin_provision": false, 00:10:12.070 "num_allocated_clusters": 38, 00:10:12.070 "snapshot": false, 00:10:12.070 "clone": false, 00:10:12.070 "esnap_clone": false 00:10:12.070 } 00:10:12.070 } 00:10:12.070 } 00:10:12.070 ] 00:10:12.070 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:12.071 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:12.071 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:12.330 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:12.330 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:12.330 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:12.330 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:12.330 05:27:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:12.589 [2024-11-27 05:27:09.072374] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 
-- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:12.589 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:12.862 request: 00:10:12.862 { 00:10:12.862 "uuid": "6abc3c8a-c280-4e11-9831-3c8b36e73aaf", 00:10:12.862 "method": "bdev_lvol_get_lvstores", 00:10:12.862 "req_id": 1 00:10:12.862 } 00:10:12.862 Got JSON-RPC error response 00:10:12.862 response: 00:10:12.862 { 00:10:12.862 "code": -19, 00:10:12.862 "message": "No such device" 00:10:12.862 } 00:10:12.862 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:12.862 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:12.862 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:12.862 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:12.862 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:13.121 aio_bdev 00:10:13.121 
05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a0287ba-2ed4-467a-85fa-17a127a64c5e 00:10:13.121 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1a0287ba-2ed4-467a-85fa-17a127a64c5e 00:10:13.121 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.121 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:13.121 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.121 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.121 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:13.121 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a0287ba-2ed4-467a-85fa-17a127a64c5e -t 2000 00:10:13.381 [ 00:10:13.381 { 00:10:13.381 "name": "1a0287ba-2ed4-467a-85fa-17a127a64c5e", 00:10:13.381 "aliases": [ 00:10:13.381 "lvs/lvol" 00:10:13.381 ], 00:10:13.381 "product_name": "Logical Volume", 00:10:13.381 "block_size": 4096, 00:10:13.381 "num_blocks": 38912, 00:10:13.381 "uuid": "1a0287ba-2ed4-467a-85fa-17a127a64c5e", 00:10:13.381 "assigned_rate_limits": { 00:10:13.381 "rw_ios_per_sec": 0, 00:10:13.381 "rw_mbytes_per_sec": 0, 00:10:13.381 "r_mbytes_per_sec": 0, 00:10:13.381 "w_mbytes_per_sec": 0 00:10:13.381 }, 00:10:13.381 "claimed": false, 00:10:13.381 "zoned": false, 00:10:13.381 "supported_io_types": { 00:10:13.381 "read": true, 00:10:13.381 "write": true, 00:10:13.381 "unmap": true, 
00:10:13.381 "flush": false, 00:10:13.381 "reset": true, 00:10:13.381 "nvme_admin": false, 00:10:13.381 "nvme_io": false, 00:10:13.381 "nvme_io_md": false, 00:10:13.381 "write_zeroes": true, 00:10:13.381 "zcopy": false, 00:10:13.381 "get_zone_info": false, 00:10:13.381 "zone_management": false, 00:10:13.381 "zone_append": false, 00:10:13.381 "compare": false, 00:10:13.381 "compare_and_write": false, 00:10:13.381 "abort": false, 00:10:13.381 "seek_hole": true, 00:10:13.381 "seek_data": true, 00:10:13.381 "copy": false, 00:10:13.381 "nvme_iov_md": false 00:10:13.381 }, 00:10:13.381 "driver_specific": { 00:10:13.381 "lvol": { 00:10:13.381 "lvol_store_uuid": "6abc3c8a-c280-4e11-9831-3c8b36e73aaf", 00:10:13.381 "base_bdev": "aio_bdev", 00:10:13.381 "thin_provision": false, 00:10:13.381 "num_allocated_clusters": 38, 00:10:13.381 "snapshot": false, 00:10:13.381 "clone": false, 00:10:13.381 "esnap_clone": false 00:10:13.381 } 00:10:13.381 } 00:10:13.381 } 00:10:13.381 ] 00:10:13.381 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:13.381 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:13.381 05:27:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:13.640 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:13.640 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:13.640 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:10:13.899 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:13.899 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a0287ba-2ed4-467a-85fa-17a127a64c5e 00:10:13.899 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6abc3c8a-c280-4e11-9831-3c8b36e73aaf 00:10:14.158 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:14.417 00:10:14.417 real 0m18.874s 00:10:14.417 user 0m48.796s 00:10:14.417 sys 0m3.586s 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.417 ************************************ 00:10:14.417 END TEST lvs_grow_dirty 00:10:14.417 ************************************ 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:14.417 05:27:10 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:14.417 nvmf_trace.0 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.417 05:27:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:14.417 rmmod nvme_rdma 00:10:14.417 rmmod nvme_fabrics 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:14.676 05:27:11 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3207355 ']' 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3207355 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3207355 ']' 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3207355 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207355 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207355' 00:10:14.676 killing process with pid 3207355 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3207355 00:10:14.676 05:27:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3207355 00:10:15.612 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:15.613 00:10:15.613 real 0m47.585s 00:10:15.613 user 1m13.478s 00:10:15.613 sys 0m12.497s 00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:15.613 ************************************ 00:10:15.613 END TEST nvmf_lvs_grow 00:10:15.613 ************************************ 00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.613 ************************************ 00:10:15.613 START TEST nvmf_bdev_io_wait 00:10:15.613 ************************************ 00:10:15.613 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:15.872 * Looking for test storage... 
00:10:15.872 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.872 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
scripts/common.sh@345 -- # : 1 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:10:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.873 --rc genhtml_branch_coverage=1 00:10:15.873 --rc genhtml_function_coverage=1 00:10:15.873 --rc genhtml_legend=1 00:10:15.873 --rc geninfo_all_blocks=1 00:10:15.873 --rc geninfo_unexecuted_blocks=1 00:10:15.873 00:10:15.873 ' 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.873 --rc genhtml_branch_coverage=1 00:10:15.873 --rc genhtml_function_coverage=1 00:10:15.873 --rc genhtml_legend=1 00:10:15.873 --rc geninfo_all_blocks=1 00:10:15.873 --rc geninfo_unexecuted_blocks=1 00:10:15.873 00:10:15.873 ' 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.873 --rc genhtml_branch_coverage=1 00:10:15.873 --rc genhtml_function_coverage=1 00:10:15.873 --rc genhtml_legend=1 00:10:15.873 --rc geninfo_all_blocks=1 00:10:15.873 --rc geninfo_unexecuted_blocks=1 00:10:15.873 00:10:15.873 ' 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.873 --rc genhtml_branch_coverage=1 00:10:15.873 --rc genhtml_function_coverage=1 00:10:15.873 --rc genhtml_legend=1 00:10:15.873 --rc geninfo_all_blocks=1 00:10:15.873 --rc geninfo_unexecuted_blocks=1 00:10:15.873 00:10:15.873 ' 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD 
]] 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.873 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.874 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.874 05:27:12 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@322 -- # mlx=() 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.997 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- 
# [[ rdma == rdma ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:23.998 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:23.998 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:23.998 05:27:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:23.998 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:23.998 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:23.998 05:27:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:23.998 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:23.998 05:27:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:23.998 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:23.998 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:23.998 altname enp217s0f0np0 00:10:23.998 altname ens818f0np0 00:10:23.998 inet 192.168.100.8/24 scope global mlx_0_0 00:10:23.998 valid_lft forever preferred_lft forever 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:24.258 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:24.258 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:24.258 altname enp217s0f1np1 00:10:24.258 altname ens818f1np1 00:10:24.258 inet 192.168.100.9/24 scope global mlx_0_1 00:10:24.258 valid_lft forever preferred_lft forever 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.258 
05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.258 05:27:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.258 05:27:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:24.258 192.168.100.9' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:24.258 192.168.100.9' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:24.258 192.168.100.9' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.258 05:27:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3212285 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3212285 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3212285 ']' 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.258 05:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.258 [2024-11-27 05:27:20.823261] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:24.258 [2024-11-27 05:27:20.823365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.517 [2024-11-27 05:27:20.979320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.518 [2024-11-27 05:27:21.078861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.518 [2024-11-27 05:27:21.078912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.518 [2024-11-27 05:27:21.078925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.518 [2024-11-27 05:27:21.078939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.518 [2024-11-27 05:27:21.078949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:24.518 [2024-11-27 05:27:21.081737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.518 [2024-11-27 05:27:21.081762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.518 [2024-11-27 05:27:21.081822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.518 [2024-11-27 05:27:21.081829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.086 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.346 05:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.346 [2024-11-27 05:27:21.929101] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fd3c51bd940) succeed. 00:10:25.604 [2024-11-27 05:27:21.939190] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fd3c5179940) succeed. 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.864 Malloc0 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.864 [2024-11-27 05:27:22.306306] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:25.864 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3212585 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3212587 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:25.865 05:27:22 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:25.865 { 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme$subsystem", 00:10:25.865 "trtype": "$TEST_TRANSPORT", 00:10:25.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "$NVMF_PORT", 00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.865 "hdgst": ${hdgst:-false}, 00:10:25.865 "ddgst": ${ddgst:-false} 00:10:25.865 }, 00:10:25.865 "method": "bdev_nvme_attach_controller" 00:10:25.865 } 00:10:25.865 EOF 00:10:25.865 )") 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3212589 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:25.865 { 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme$subsystem", 00:10:25.865 "trtype": "$TEST_TRANSPORT", 00:10:25.865 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "$NVMF_PORT", 00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.865 "hdgst": ${hdgst:-false}, 00:10:25.865 "ddgst": ${ddgst:-false} 00:10:25.865 }, 00:10:25.865 "method": "bdev_nvme_attach_controller" 00:10:25.865 } 00:10:25.865 EOF 00:10:25.865 )") 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3212592 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:25.865 { 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme$subsystem", 00:10:25.865 "trtype": "$TEST_TRANSPORT", 00:10:25.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "$NVMF_PORT", 00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.865 "hdgst": ${hdgst:-false}, 00:10:25.865 "ddgst": ${ddgst:-false} 00:10:25.865 }, 00:10:25.865 
"method": "bdev_nvme_attach_controller" 00:10:25.865 } 00:10:25.865 EOF 00:10:25.865 )") 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:25.865 { 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme$subsystem", 00:10:25.865 "trtype": "$TEST_TRANSPORT", 00:10:25.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "$NVMF_PORT", 00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.865 "hdgst": ${hdgst:-false}, 00:10:25.865 "ddgst": ${ddgst:-false} 00:10:25.865 }, 00:10:25.865 "method": "bdev_nvme_attach_controller" 00:10:25.865 } 00:10:25.865 EOF 00:10:25.865 )") 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3212585 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme1", 00:10:25.865 "trtype": "rdma", 00:10:25.865 "traddr": "192.168.100.8", 00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "4420", 00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.865 "hdgst": false, 00:10:25.865 "ddgst": false 00:10:25.865 }, 00:10:25.865 "method": "bdev_nvme_attach_controller" 00:10:25.865 }' 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme1", 00:10:25.865 "trtype": "rdma", 00:10:25.865 "traddr": "192.168.100.8", 00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "4420", 00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.865 "hdgst": false, 00:10:25.865 "ddgst": false 00:10:25.865 }, 00:10:25.865 "method": "bdev_nvme_attach_controller" 00:10:25.865 }' 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme1", 00:10:25.865 "trtype": "rdma", 00:10:25.865 "traddr": "192.168.100.8", 00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "4420", 
00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.865 "hdgst": false, 00:10:25.865 "ddgst": false 00:10:25.865 }, 00:10:25.865 "method": "bdev_nvme_attach_controller" 00:10:25.865 }' 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:25.865 05:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:25.865 "params": { 00:10:25.865 "name": "Nvme1", 00:10:25.865 "trtype": "rdma", 00:10:25.865 "traddr": "192.168.100.8", 00:10:25.865 "adrfam": "ipv4", 00:10:25.865 "trsvcid": "4420", 00:10:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.865 "hdgst": false, 00:10:25.865 "ddgst": false 00:10:25.865 }, 00:10:25.865 "method": "bdev_nvme_attach_controller" 00:10:25.865 }' 00:10:25.865 [2024-11-27 05:27:22.392355] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:25.865 [2024-11-27 05:27:22.392453] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:25.865 [2024-11-27 05:27:22.394099] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:25.865 [2024-11-27 05:27:22.394178] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:25.865 [2024-11-27 05:27:22.396419] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:25.865 [2024-11-27 05:27:22.396508] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:25.865 [2024-11-27 05:27:22.397200] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:25.865 [2024-11-27 05:27:22.397280] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:26.124 [2024-11-27 05:27:22.672524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.381 [2024-11-27 05:27:22.766692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.381 [2024-11-27 05:27:22.771969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:26.381 [2024-11-27 05:27:22.863927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:26.381 [2024-11-27 05:27:22.873491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.381 [2024-11-27 05:27:22.928972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.638 [2024-11-27 05:27:22.976402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:26.638 [2024-11-27 05:27:23.026675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:26.638 Running I/O for 1 seconds... 00:10:26.894 Running I/O for 1 seconds... 00:10:26.894 Running I/O for 1 seconds... 00:10:26.894 Running I/O for 1 seconds... 
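
Each of the four bdevperf instances above reads its target config from `/dev/fd/63`, generated by `gen_nvmf_target_json`. A simplified sketch of how one such config fragment is built (the variable values match this run; the assembly itself is a reconstruction, and the real helper additionally wraps the fragments with `jq` before handing them to bdevperf):

```shell
# Simplified reconstruction of one gen_nvmf_target_json fragment: shell
# variables are substituted into a heredoc to produce the params for a
# bdev_nvme_attach_controller RPC call.
TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=192.168.100.8
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

echo "$config"
```

Because the heredoc delimiter is unquoted, `$subsystem` and the NVMF variables expand at generation time, which is how the `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP` placeholders in the trace become the concrete `"rdma"`/`"192.168.100.8"` values printed by `printf '%s\n'`.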
00:10:27.827 16886.00 IOPS, 65.96 MiB/s 00:10:27.827 Latency(us) 00:10:27.827 [2024-11-27T04:27:24.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.827 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:27.827 Nvme1n1 : 1.01 16919.92 66.09 0.00 0.00 7540.10 4928.31 20237.52 00:10:27.827 [2024-11-27T04:27:24.414Z] =================================================================================================================== 00:10:27.827 [2024-11-27T04:27:24.414Z] Total : 16919.92 66.09 0.00 0.00 7540.10 4928.31 20237.52 00:10:27.827 13405.00 IOPS, 52.36 MiB/s 00:10:27.827 Latency(us) 00:10:27.827 [2024-11-27T04:27:24.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.827 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:27.827 Nvme1n1 : 1.01 13456.72 52.57 0.00 0.00 9478.77 5295.31 25690.11 00:10:27.827 [2024-11-27T04:27:24.414Z] =================================================================================================================== 00:10:27.827 [2024-11-27T04:27:24.414Z] Total : 13456.72 52.57 0.00 0.00 9478.77 5295.31 25690.11 00:10:27.827 17321.00 IOPS, 67.66 MiB/s 00:10:27.827 Latency(us) 00:10:27.827 [2024-11-27T04:27:24.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.827 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:27.827 Nvme1n1 : 1.01 17400.83 67.97 0.00 0.00 7336.51 3276.80 23907.53 00:10:27.827 [2024-11-27T04:27:24.414Z] =================================================================================================================== 00:10:27.827 [2024-11-27T04:27:24.414Z] Total : 17400.83 67.97 0.00 0.00 7336.51 3276.80 23907.53 00:10:27.827 227512.00 IOPS, 888.72 MiB/s 00:10:27.827 Latency(us) 00:10:27.827 [2024-11-27T04:27:24.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.827 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:10:27.827 Nvme1n1 : 1.00 227157.23 887.33 0.00 0.00 560.48 244.12 2608.33 00:10:27.827 [2024-11-27T04:27:24.414Z] =================================================================================================================== 00:10:27.827 [2024-11-27T04:27:24.414Z] Total : 227157.23 887.33 0.00 0.00 560.48 244.12 2608.33 00:10:28.395 05:27:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3212587 00:10:28.395 05:27:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3212589 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3212592 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # 
set +e 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:28.654 rmmod nvme_rdma 00:10:28.654 rmmod nvme_fabrics 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3212285 ']' 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3212285 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3212285 ']' 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3212285 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3212285 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3212285' 00:10:28.654 killing process with pid 3212285 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@973 -- # kill 3212285 00:10:28.654 05:27:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3212285 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:30.563 00:10:30.563 real 0m14.585s 00:10:30.563 user 0m31.891s 00:10:30.563 sys 0m8.484s 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.563 ************************************ 00:10:30.563 END TEST nvmf_bdev_io_wait 00:10:30.563 ************************************ 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.563 ************************************ 00:10:30.563 START TEST nvmf_queue_depth 00:10:30.563 ************************************ 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:30.563 * Looking for test storage... 
00:10:30.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.563 05:27:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 
00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.563 --rc 
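
The `lt 1.15 2` trace above walks scripts/common.sh's component-wise version compare (split on `.`/`-`, compare element by element). A simplified, numeric-only reconstruction of that logic:

```shell
# Sketch of the version comparison traced above (assumption: simplified to
# purely numeric components). Returns success (0) when $1 < $2.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    # Compare up to the longer of the two component lists, padding with 0.
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace: the major components differ (`1 < 2`), so the compare returns immediately and `lt 1.15 2` succeeds, letting the lcov-version branch run.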
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.563 --rc genhtml_branch_coverage=1 00:10:30.563 --rc genhtml_function_coverage=1 00:10:30.563 --rc genhtml_legend=1 00:10:30.563 --rc geninfo_all_blocks=1 00:10:30.563 --rc geninfo_unexecuted_blocks=1 00:10:30.563 00:10:30.563 ' 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.563 --rc genhtml_branch_coverage=1 00:10:30.563 --rc genhtml_function_coverage=1 00:10:30.563 --rc genhtml_legend=1 00:10:30.563 --rc geninfo_all_blocks=1 00:10:30.563 --rc geninfo_unexecuted_blocks=1 00:10:30.563 00:10:30.563 ' 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.563 --rc genhtml_branch_coverage=1 00:10:30.563 --rc genhtml_function_coverage=1 00:10:30.563 --rc genhtml_legend=1 00:10:30.563 --rc geninfo_all_blocks=1 00:10:30.563 --rc geninfo_unexecuted_blocks=1 00:10:30.563 00:10:30.563 ' 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.563 --rc genhtml_branch_coverage=1 00:10:30.563 --rc genhtml_function_coverage=1 00:10:30.563 --rc genhtml_legend=1 00:10:30.563 --rc geninfo_all_blocks=1 00:10:30.563 --rc geninfo_unexecuted_blocks=1 00:10:30.563 00:10:30.563 ' 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.563 05:27:27 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:30.563 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.564 05:27:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 
-- # local -ga x722 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.696 
05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:38.696 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:38.696 Found 0000:d9:00.1 (0x15b3 - 
0x1015) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:38.696 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:38.696 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:38.696 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.697 05:27:35 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:38.697 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:38.697 05:27:35 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:38.958 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.958 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:38.958 altname enp217s0f0np0 00:10:38.958 altname ens818f0np0 00:10:38.958 inet 192.168.100.8/24 scope global mlx_0_0 00:10:38.958 valid_lft forever preferred_lft forever 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:38.958 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.958 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:38.958 altname enp217s0f1np1 00:10:38.958 altname ens818f1np1 00:10:38.958 inet 192.168.100.9/24 scope global mlx_0_1 00:10:38.958 valid_lft forever preferred_lft forever 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.958 05:27:35 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:10:38.958 192.168.100.9' 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:38.958 192.168.100.9' 00:10:38.958 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # head -n 1 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:38.959 192.168.100.9' 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:38.959 
05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3217590 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3217590 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3217590 ']' 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.959 05:27:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:38.959 [2024-11-27 05:27:35.519499] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:38.959 [2024-11-27 05:27:35.519603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.218 [2024-11-27 05:27:35.675061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.218 [2024-11-27 05:27:35.771656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:39.218 [2024-11-27 05:27:35.771707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.218 [2024-11-27 05:27:35.771720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.219 [2024-11-27 05:27:35.771733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.219 [2024-11-27 05:27:35.771742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.219 [2024-11-27 05:27:35.773059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.788 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.048 [2024-11-27 05:27:36.382645] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f29bd725940) succeed. 
00:10:40.048 [2024-11-27 05:27:36.391882] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f29bcdbd940) succeed. 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.048 Malloc0 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 
-s 4420 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.048 [2024-11-27 05:27:36.559470] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3217756 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3217756 /var/tmp/bdevperf.sock 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3217756 ']' 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:40.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:40.048 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.049 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.049 05:27:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:40.308 [2024-11-27 05:27:36.647123] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:40.308 [2024-11-27 05:27:36.647218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217756 ] 00:10:40.308 [2024-11-27 05:27:36.802380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.569 [2024-11-27 05:27:36.902014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.138 05:27:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.138 05:27:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:41.138 05:27:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:41.138 05:27:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.138 05:27:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:41.138 NVMe0n1 00:10:41.138 05:27:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.138 05:27:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:41.138 Running I/O for 10 seconds... 00:10:43.454 15071.00 IOPS, 58.87 MiB/s [2024-11-27T04:27:40.981Z] 15360.00 IOPS, 60.00 MiB/s [2024-11-27T04:27:41.919Z] 15360.00 IOPS, 60.00 MiB/s [2024-11-27T04:27:42.857Z] 15469.25 IOPS, 60.43 MiB/s [2024-11-27T04:27:43.795Z] 15564.80 IOPS, 60.80 MiB/s [2024-11-27T04:27:44.733Z] 15536.17 IOPS, 60.69 MiB/s [2024-11-27T04:27:46.110Z] 15601.14 IOPS, 60.94 MiB/s [2024-11-27T04:27:46.679Z] 15610.75 IOPS, 60.98 MiB/s [2024-11-27T04:27:48.058Z] 15587.56 IOPS, 60.89 MiB/s [2024-11-27T04:27:48.058Z] 15606.90 IOPS, 60.96 MiB/s 00:10:51.471 Latency(us) 00:10:51.471 [2024-11-27T04:27:48.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.471 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:51.471 Verification LBA range: start 0x0 length 0x4000 00:10:51.471 NVMe0n1 : 10.04 15634.41 61.07 0.00 0.00 65284.08 7025.46 42362.47 00:10:51.471 [2024-11-27T04:27:48.058Z] =================================================================================================================== 00:10:51.471 [2024-11-27T04:27:48.058Z] Total : 15634.41 61.07 0.00 0.00 65284.08 7025.46 42362.47 00:10:51.471 { 00:10:51.471 "results": [ 00:10:51.471 { 00:10:51.471 "job": "NVMe0n1", 00:10:51.471 "core_mask": "0x1", 00:10:51.471 "workload": "verify", 00:10:51.471 "status": "finished", 00:10:51.471 "verify_range": { 00:10:51.471 "start": 0, 00:10:51.471 "length": 16384 00:10:51.471 }, 00:10:51.471 "queue_depth": 1024, 00:10:51.471 "io_size": 4096, 00:10:51.471 "runtime": 10.038818, 00:10:51.471 "iops": 15634.41034591921, 00:10:51.471 "mibps": 61.071915413746915, 00:10:51.471 "io_failed": 0, 00:10:51.471 "io_timeout": 0, 00:10:51.471 "avg_latency_us": 65284.07562582461, 00:10:51.471 "min_latency_us": 7025.4592, 00:10:51.471 "max_latency_us": 
42362.4704 00:10:51.471 } 00:10:51.471 ], 00:10:51.471 "core_count": 1 00:10:51.471 } 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3217756 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3217756 ']' 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3217756 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3217756 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3217756' 00:10:51.471 killing process with pid 3217756 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3217756 00:10:51.471 Received shutdown signal, test time was about 10.000000 seconds 00:10:51.471 00:10:51.471 Latency(us) 00:10:51.471 [2024-11-27T04:27:48.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.471 [2024-11-27T04:27:48.058Z] =================================================================================================================== 00:10:51.471 [2024-11-27T04:27:48.058Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:51.471 05:27:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3217756 00:10:52.408 05:27:48 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:52.408 rmmod nvme_rdma 00:10:52.408 rmmod nvme_fabrics 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3217590 ']' 00:10:52.408 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3217590 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3217590 ']' 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3217590 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:52.409 05:27:48 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3217590 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3217590' 00:10:52.409 killing process with pid 3217590 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3217590 00:10:52.409 05:27:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3217590 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:53.788 00:10:53.788 real 0m23.340s 00:10:53.788 user 0m29.121s 00:10:53.788 sys 0m7.412s 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.788 ************************************ 00:10:53.788 END TEST nvmf_queue_depth 00:10:53.788 ************************************ 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:53.788 ************************************ 00:10:53.788 START TEST nvmf_target_multipath 00:10:53.788 ************************************ 00:10:53.788 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:54.048 * Looking for test storage... 00:10:54.048 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.048 05:27:50 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.048 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.049 --rc genhtml_branch_coverage=1 00:10:54.049 --rc genhtml_function_coverage=1 00:10:54.049 --rc genhtml_legend=1 00:10:54.049 --rc geninfo_all_blocks=1 00:10:54.049 --rc geninfo_unexecuted_blocks=1 00:10:54.049 00:10:54.049 ' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.049 --rc genhtml_branch_coverage=1 00:10:54.049 --rc genhtml_function_coverage=1 00:10:54.049 --rc genhtml_legend=1 00:10:54.049 --rc geninfo_all_blocks=1 00:10:54.049 --rc geninfo_unexecuted_blocks=1 00:10:54.049 00:10:54.049 ' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.049 --rc genhtml_branch_coverage=1 00:10:54.049 --rc genhtml_function_coverage=1 00:10:54.049 --rc genhtml_legend=1 00:10:54.049 --rc geninfo_all_blocks=1 00:10:54.049 --rc geninfo_unexecuted_blocks=1 00:10:54.049 00:10:54.049 ' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:10:54.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.049 --rc genhtml_branch_coverage=1 00:10:54.049 --rc genhtml_function_coverage=1 00:10:54.049 --rc genhtml_legend=1 00:10:54.049 --rc geninfo_all_blocks=1 00:10:54.049 --rc geninfo_unexecuted_blocks=1 00:10:54.049 00:10:54.049 ' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:54.049 05:27:50 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.049 05:27:50 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.049 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.049 05:27:50 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.049 05:27:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.040 05:27:58 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:04.040 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:04.040 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:04.041 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:04.041 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:04.041 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.041 
05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:04.041 05:27:58 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:04.041 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:04.041 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:04.041 altname enp217s0f0np0 00:11:04.041 altname ens818f0np0 00:11:04.041 inet 192.168.100.8/24 scope global mlx_0_0 00:11:04.041 valid_lft forever preferred_lft forever 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:04.041 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:04.041 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:04.041 altname enp217s0f1np1 00:11:04.041 altname ens818f1np1 00:11:04.041 inet 192.168.100.9/24 scope global mlx_0_1 00:11:04.041 
valid_lft forever preferred_lft forever 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:04.041 05:27:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@109 -- # continue 2 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:04.041 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:04.042 192.168.100.9' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:04.042 192.168.100.9' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:04.042 192.168.100.9' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:04.042 run this test only with TCP transport for now 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:04.042 rmmod nvme_rdma 00:11:04.042 rmmod nvme_fabrics 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:04.042 00:11:04.042 real 0m8.865s 00:11:04.042 user 0m2.570s 00:11:04.042 sys 0m6.530s 00:11:04.042 05:27:59 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.042 ************************************ 00:11:04.042 END TEST nvmf_target_multipath 00:11:04.042 ************************************ 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.042 ************************************ 00:11:04.042 START TEST nvmf_zcopy 00:11:04.042 ************************************ 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:04.042 * Looking for test storage... 
00:11:04.042 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 
00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:04.042 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.043 --rc genhtml_branch_coverage=1 00:11:04.043 --rc genhtml_function_coverage=1 00:11:04.043 --rc genhtml_legend=1 00:11:04.043 --rc 
geninfo_all_blocks=1 00:11:04.043 --rc geninfo_unexecuted_blocks=1 00:11:04.043 00:11:04.043 ' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.043 --rc genhtml_branch_coverage=1 00:11:04.043 --rc genhtml_function_coverage=1 00:11:04.043 --rc genhtml_legend=1 00:11:04.043 --rc geninfo_all_blocks=1 00:11:04.043 --rc geninfo_unexecuted_blocks=1 00:11:04.043 00:11:04.043 ' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.043 --rc genhtml_branch_coverage=1 00:11:04.043 --rc genhtml_function_coverage=1 00:11:04.043 --rc genhtml_legend=1 00:11:04.043 --rc geninfo_all_blocks=1 00:11:04.043 --rc geninfo_unexecuted_blocks=1 00:11:04.043 00:11:04.043 ' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.043 --rc genhtml_branch_coverage=1 00:11:04.043 --rc genhtml_function_coverage=1 00:11:04.043 --rc genhtml_legend=1 00:11:04.043 --rc geninfo_all_blocks=1 00:11:04.043 --rc geninfo_unexecuted_blocks=1 00:11:04.043 00:11:04.043 ' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.043 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.043 05:27:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.171 
05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:12.171 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:12.171 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.171 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:12.171 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:12.172 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # 
(( 2 == 0 )) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:12.172 
05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:12.172 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.172 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:12.172 altname enp217s0f0np0 00:11:12.172 altname ens818f0np0 00:11:12.172 inet 192.168.100.8/24 scope global mlx_0_0 00:11:12.172 valid_lft forever preferred_lft forever 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # 
[[ -z 192.168.100.9 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:12.172 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.172 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:12.172 altname enp217s0f1np1 00:11:12.172 altname ens818f1np1 00:11:12.172 inet 192.168.100.9/24 scope global mlx_0_1 00:11:12.172 valid_lft forever preferred_lft forever 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:12.172 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:12.173 05:28:07 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.173 05:28:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:12.173 192.168.100.9' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:12.173 192.168.100.9' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:12.173 192.168.100.9' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3228347 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3228347 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3228347 ']' 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.173 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:12.173 [2024-11-27 05:28:08.141991] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:12.173 [2024-11-27 05:28:08.142085] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.173 [2024-11-27 05:28:08.295439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.173 [2024-11-27 05:28:08.390100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.173 [2024-11-27 05:28:08.390150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.173 [2024-11-27 05:28:08.390162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.173 [2024-11-27 05:28:08.390192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.173 [2024-11-27 05:28:08.390202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:12.173 [2024-11-27 05:28:08.391683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:11:12.432 Unsupported transport: rdma 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:12.432 05:28:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:12.432 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:12.432 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 
]] 00:11:12.432 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:12.432 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:12.432 nvmf_trace.0 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:12.691 rmmod nvme_rdma 00:11:12.691 rmmod nvme_fabrics 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3228347 ']' 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3228347 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3228347 ']' 00:11:12.691 
05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3228347 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228347 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3228347' 00:11:12.691 killing process with pid 3228347 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3228347 00:11:12.691 05:28:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3228347 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:14.070 00:11:14.070 real 0m10.984s 00:11:14.070 user 0m4.784s 00:11:14.070 sys 0m7.035s 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:14.070 ************************************ 00:11:14.070 END TEST nvmf_zcopy 00:11:14.070 ************************************ 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:14.070 05:28:10 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.070 ************************************ 00:11:14.070 START TEST nvmf_nmic 00:11:14.070 ************************************ 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:14.070 * Looking for test storage... 00:11:14.070 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 
-- # local 'op=<' 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.070 --rc genhtml_branch_coverage=1 00:11:14.070 --rc genhtml_function_coverage=1 00:11:14.070 --rc genhtml_legend=1 00:11:14.070 --rc geninfo_all_blocks=1 00:11:14.070 --rc geninfo_unexecuted_blocks=1 00:11:14.070 00:11:14.070 ' 00:11:14.070 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.070 --rc genhtml_branch_coverage=1 00:11:14.070 --rc genhtml_function_coverage=1 00:11:14.070 --rc genhtml_legend=1 00:11:14.070 --rc geninfo_all_blocks=1 00:11:14.070 --rc geninfo_unexecuted_blocks=1 00:11:14.070 00:11:14.071 ' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.071 --rc genhtml_branch_coverage=1 00:11:14.071 --rc genhtml_function_coverage=1 00:11:14.071 --rc genhtml_legend=1 00:11:14.071 --rc geninfo_all_blocks=1 00:11:14.071 --rc geninfo_unexecuted_blocks=1 00:11:14.071 00:11:14.071 ' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.071 --rc genhtml_branch_coverage=1 00:11:14.071 --rc genhtml_function_coverage=1 00:11:14.071 --rc genhtml_legend=1 00:11:14.071 --rc geninfo_all_blocks=1 00:11:14.071 --rc geninfo_unexecuted_blocks=1 00:11:14.071 00:11:14.071 ' 00:11:14.071 05:28:10 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.071 05:28:10 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.071 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.071 
05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.071 05:28:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.191 05:28:18 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.191 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:22.192 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:22.192 05:28:18 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:22.192 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.192 05:28:18 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:22.192 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:22.192 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 
00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:22.192 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:22.192 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:22.192 altname enp217s0f0np0 00:11:22.192 altname ens818f0np0 00:11:22.192 inet 192.168.100.8/24 scope global mlx_0_0 00:11:22.192 valid_lft forever preferred_lft forever 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:22.192 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:22.192 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:22.192 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:22.192 altname enp217s0f1np1 00:11:22.192 altname ens818f1np1 00:11:22.192 inet 192.168.100.9/24 scope global mlx_0_1 00:11:22.192 valid_lft forever preferred_lft forever 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' 
'' == iso ']' 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:22.193 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:22.458 05:28:18 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:22.458 192.168.100.9' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:22.458 192.168.100.9' 
00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:22.458 192.168.100.9' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:22.458 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3232791 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.459 05:28:18 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3232791 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3232791 ']' 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.459 05:28:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:22.459 [2024-11-27 05:28:18.959335] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:22.459 [2024-11-27 05:28:18.959431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.717 [2024-11-27 05:28:19.112339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.717 [2024-11-27 05:28:19.211434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.717 [2024-11-27 05:28:19.211488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.717 [2024-11-27 05:28:19.211500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.717 [2024-11-27 05:28:19.211513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:22.717 [2024-11-27 05:28:19.211523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.717 [2024-11-27 05:28:19.214190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.717 [2024-11-27 05:28:19.214262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.717 [2024-11-27 05:28:19.214356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.717 [2024-11-27 05:28:19.214365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.286 05:28:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.286 [2024-11-27 05:28:19.848183] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fa9749b3940) succeed. 00:11:23.286 [2024-11-27 05:28:19.858122] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fa97496f940) succeed. 
00:11:23.545 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.545 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:23.545 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.545 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 Malloc0 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 [2024-11-27 05:28:20.217519] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:23.804 test case1: single bdev can't be used in multiple subsystems 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 [2024-11-27 05:28:20.245349] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type 
exclusive_write by module NVMe-oF Target 00:11:23.804 [2024-11-27 05:28:20.245383] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:23.804 [2024-11-27 05:28:20.245396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.804 request: 00:11:23.804 { 00:11:23.804 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:23.804 "namespace": { 00:11:23.804 "bdev_name": "Malloc0", 00:11:23.804 "no_auto_visible": false, 00:11:23.804 "hide_metadata": false 00:11:23.804 }, 00:11:23.804 "method": "nvmf_subsystem_add_ns", 00:11:23.804 "req_id": 1 00:11:23.804 } 00:11:23.804 Got JSON-RPC error response 00:11:23.804 response: 00:11:23.804 { 00:11:23.804 "code": -32602, 00:11:23.804 "message": "Invalid parameters" 00:11:23.804 } 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:23.804 Adding namespace failed - expected result. 
00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:23.804 test case2: host connect to nvmf target in multiple paths 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.804 [2024-11-27 05:28:20.261424] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.804 05:28:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:24.742 05:28:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:11:26.194 05:28:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.194 05:28:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:26.194 05:28:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.194 05:28:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:26.194 05:28:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1209 -- # sleep 2 00:11:28.216 05:28:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:28.216 05:28:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:28.216 05:28:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.216 05:28:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:28.216 05:28:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.216 05:28:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:28.216 05:28:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:28.216 [global] 00:11:28.216 thread=1 00:11:28.216 invalidate=1 00:11:28.216 rw=write 00:11:28.216 time_based=1 00:11:28.216 runtime=1 00:11:28.216 ioengine=libaio 00:11:28.216 direct=1 00:11:28.216 bs=4096 00:11:28.216 iodepth=1 00:11:28.216 norandommap=0 00:11:28.216 numjobs=1 00:11:28.216 00:11:28.216 verify_dump=1 00:11:28.216 verify_backlog=512 00:11:28.216 verify_state_save=0 00:11:28.216 do_verify=1 00:11:28.216 verify=crc32c-intel 00:11:28.216 [job0] 00:11:28.216 filename=/dev/nvme0n1 00:11:28.216 Could not set queue depth (nvme0n1) 00:11:28.216 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.216 fio-3.35 00:11:28.216 Starting 1 thread 00:11:29.597 00:11:29.597 job0: (groupid=0, jobs=1): err= 0: pid=3233818: Wed Nov 27 05:28:25 2024 00:11:29.597 read: IOPS=6545, BW=25.6MiB/s (26.8MB/s)(25.6MiB/1001msec) 00:11:29.597 slat (nsec): min=8212, max=34636, avg=8769.09, stdev=1037.91 00:11:29.597 clat (nsec): min=45463, max=92249, avg=64219.66, stdev=3947.79 
00:11:29.597 lat (usec): min=63, max=101, avg=72.99, stdev= 4.08 00:11:29.597 clat percentiles (nsec): 00:11:29.597 | 1.00th=[57088], 5.00th=[58624], 10.00th=[59648], 20.00th=[60672], 00:11:29.597 | 30.00th=[61696], 40.00th=[62720], 50.00th=[63744], 60.00th=[64768], 00:11:29.597 | 70.00th=[66048], 80.00th=[67072], 90.00th=[69120], 95.00th=[71168], 00:11:29.597 | 99.00th=[75264], 99.50th=[76288], 99.90th=[83456], 99.95th=[85504], 00:11:29.597 | 99.99th=[92672] 00:11:29.597 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:11:29.597 slat (nsec): min=8499, max=38574, avg=11312.17, stdev=1042.85 00:11:29.597 clat (usec): min=45, max=112, avg=61.83, stdev= 3.99 00:11:29.597 lat (usec): min=63, max=151, avg=73.14, stdev= 4.14 00:11:29.597 clat percentiles (usec): 00:11:29.597 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 59], 00:11:29.597 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:11:29.597 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 69], 00:11:29.597 | 99.00th=[ 73], 99.50th=[ 75], 99.90th=[ 81], 99.95th=[ 88], 00:11:29.597 | 99.99th=[ 114] 00:11:29.597 bw ( KiB/s): min=28256, max=28256, per=100.00%, avg=28256.00, stdev= 0.00, samples=1 00:11:29.597 iops : min= 7064, max= 7064, avg=7064.00, stdev= 0.00, samples=1 00:11:29.597 lat (usec) : 50=0.02%, 100=99.96%, 250=0.02% 00:11:29.597 cpu : usr=10.90%, sys=17.00%, ctx=13209, majf=0, minf=1 00:11:29.597 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.597 issued rwts: total=6552,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.597 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.597 00:11:29.597 Run status group 0 (all jobs): 00:11:29.597 READ: bw=25.6MiB/s (26.8MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=25.6MiB (26.8MB), 
run=1001-1001msec 00:11:29.597 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:11:29.597 00:11:29.597 Disk stats (read/write): 00:11:29.597 nvme0n1: ios=5775/6144, merge=0/0, ticks=318/328, in_queue=646, util=90.58% 00:11:29.597 05:28:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:31.504 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.504 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.504 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.504 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.504 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:31.505 05:28:27 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:31.505 rmmod nvme_rdma 00:11:31.505 rmmod nvme_fabrics 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3232791 ']' 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3232791 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3232791 ']' 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3232791 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3232791 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3232791' 00:11:31.505 killing process with pid 3232791 00:11:31.505 05:28:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3232791 00:11:31.505 05:28:27 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3232791 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:33.412 00:11:33.412 real 0m19.388s 00:11:33.412 user 0m52.059s 00:11:33.412 sys 0m7.555s 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:33.412 ************************************ 00:11:33.412 END TEST nvmf_nmic 00:11:33.412 ************************************ 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.412 ************************************ 00:11:33.412 START TEST nvmf_fio_target 00:11:33.412 ************************************ 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:33.412 * Looking for test storage... 
00:11:33.412 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:33.412 
05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.412 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.413 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:33.413 --rc genhtml_branch_coverage=1 00:11:33.413 --rc genhtml_function_coverage=1 00:11:33.413 --rc genhtml_legend=1 00:11:33.413 --rc geninfo_all_blocks=1 00:11:33.413 --rc geninfo_unexecuted_blocks=1 00:11:33.413 00:11:33.413 ' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.413 --rc genhtml_branch_coverage=1 00:11:33.413 --rc genhtml_function_coverage=1 00:11:33.413 --rc genhtml_legend=1 00:11:33.413 --rc geninfo_all_blocks=1 00:11:33.413 --rc geninfo_unexecuted_blocks=1 00:11:33.413 00:11:33.413 ' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.413 --rc genhtml_branch_coverage=1 00:11:33.413 --rc genhtml_function_coverage=1 00:11:33.413 --rc genhtml_legend=1 00:11:33.413 --rc geninfo_all_blocks=1 00:11:33.413 --rc geninfo_unexecuted_blocks=1 00:11:33.413 00:11:33.413 ' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.413 --rc genhtml_branch_coverage=1 00:11:33.413 --rc genhtml_function_coverage=1 00:11:33.413 --rc genhtml_legend=1 00:11:33.413 --rc geninfo_all_blocks=1 00:11:33.413 --rc geninfo_unexecuted_blocks=1 00:11:33.413 00:11:33.413 ' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:33.413 05:28:29 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.413 05:28:29 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.413 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.413 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.674 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.674 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.674 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.674 05:28:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.674 05:28:29 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.674 05:28:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.674 05:28:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.674 05:28:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.674 05:28:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.795 05:28:38 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.795 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:41.796 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:41.796 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:41.796 05:28:38 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:41.796 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.796 05:28:38 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:41.796 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:41.796 
05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:41.796 
05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:41.796 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:42.056 6: mlx_0_0: mtu 1500 qdisc mq state 
DOWN group default qlen 1000 00:11:42.056 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:42.056 altname enp217s0f0np0 00:11:42.056 altname ens818f0np0 00:11:42.056 inet 192.168.100.8/24 scope global mlx_0_0 00:11:42.056 valid_lft forever preferred_lft forever 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:42.056 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:42.056 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:42.056 altname enp217s0f1np1 00:11:42.056 altname ens818f1np1 00:11:42.056 inet 192.168.100.9/24 scope global mlx_0_1 00:11:42.056 valid_lft forever preferred_lft forever 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:42.056 05:28:38 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:42.056 192.168.100.9' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo 
'192.168.100.8 00:11:42.056 192.168.100.9' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:42.056 192.168.100.9' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3238795 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 
-- # waitforlisten 3238795 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3238795 ']' 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.056 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.057 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.057 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.057 05:28:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.057 [2024-11-27 05:28:38.620921] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:42.057 [2024-11-27 05:28:38.621023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.316 [2024-11-27 05:28:38.773871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.316 [2024-11-27 05:28:38.874558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.316 [2024-11-27 05:28:38.874619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:42.316 [2024-11-27 05:28:38.874649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.316 [2024-11-27 05:28:38.874662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.316 [2024-11-27 05:28:38.874671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.316 [2024-11-27 05:28:38.877319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.316 [2024-11-27 05:28:38.877392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.316 [2024-11-27 05:28:38.877485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.316 [2024-11-27 05:28:38.877494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.884 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.884 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:42.884 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.884 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.884 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.884 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:43.142 [2024-11-27 05:28:39.667994] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f3cc2fbd940) succeed. 
00:11:43.142 [2024-11-27 05:28:39.677534] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f3cc2f79940) succeed. 00:11:43.402 05:28:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.661 05:28:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:43.661 05:28:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.920 05:28:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:43.920 05:28:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:44.180 05:28:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:44.180 05:28:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:44.439 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:44.439 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:44.699 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:44.958 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:44.958 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:11:45.217 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:45.217 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:45.476 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:45.476 05:28:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:45.736 05:28:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.995 05:28:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:45.995 05:28:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:45.995 05:28:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:45.995 05:28:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.255 05:28:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:46.514 [2024-11-27 05:28:42.937230] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:46.514 05:28:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:46.773 05:28:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:47.033 05:28:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:47.970 05:28:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:47.970 05:28:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.970 05:28:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.970 05:28:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:47.970 05:28:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:47.970 05:28:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.874 05:28:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.874 05:28:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.874 05:28:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.874 05:28:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:49.874 05:28:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:11:49.874 05:28:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:49.874 05:28:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:49.874 [global] 00:11:49.874 thread=1 00:11:49.874 invalidate=1 00:11:49.874 rw=write 00:11:49.874 time_based=1 00:11:49.874 runtime=1 00:11:49.874 ioengine=libaio 00:11:49.874 direct=1 00:11:49.874 bs=4096 00:11:49.874 iodepth=1 00:11:49.874 norandommap=0 00:11:49.874 numjobs=1 00:11:49.874 00:11:49.874 verify_dump=1 00:11:49.874 verify_backlog=512 00:11:49.874 verify_state_save=0 00:11:49.874 do_verify=1 00:11:49.874 verify=crc32c-intel 00:11:49.874 [job0] 00:11:49.874 filename=/dev/nvme0n1 00:11:49.874 [job1] 00:11:49.874 filename=/dev/nvme0n2 00:11:49.874 [job2] 00:11:49.874 filename=/dev/nvme0n3 00:11:49.874 [job3] 00:11:49.874 filename=/dev/nvme0n4 00:11:50.155 Could not set queue depth (nvme0n1) 00:11:50.155 Could not set queue depth (nvme0n2) 00:11:50.155 Could not set queue depth (nvme0n3) 00:11:50.155 Could not set queue depth (nvme0n4) 00:11:50.415 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.415 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.415 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.416 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.416 fio-3.35 00:11:50.416 Starting 4 threads 00:11:51.806 00:11:51.806 job0: (groupid=0, jobs=1): err= 0: pid=3240544: Wed Nov 27 05:28:47 2024 00:11:51.806 read: IOPS=4906, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1001msec) 00:11:51.806 slat (nsec): min=8264, max=31505, avg=8778.50, stdev=897.66 00:11:51.806 clat (usec): min=58, 
max=148, avg=87.71, stdev=12.65 00:11:51.806 lat (usec): min=79, max=157, avg=96.49, stdev=12.72 00:11:51.806 clat percentiles (usec): 00:11:51.806 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:11:51.806 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:11:51.806 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 123], 00:11:51.806 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 139], 99.95th=[ 141], 00:11:51.806 | 99.99th=[ 149] 00:11:51.806 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:11:51.806 slat (nsec): min=10364, max=69071, avg=11356.93, stdev=1234.58 00:11:51.806 clat (usec): min=66, max=300, avg=86.30, stdev=14.78 00:11:51.806 lat (usec): min=77, max=311, avg=97.66, stdev=14.86 00:11:51.806 clat percentiles (usec): 00:11:51.806 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:11:51.806 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83], 00:11:51.806 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 114], 95.00th=[ 118], 00:11:51.806 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 149], 99.95th=[ 155], 00:11:51.806 | 99.99th=[ 302] 00:11:51.806 bw ( KiB/s): min=20480, max=20480, per=31.26%, avg=20480.00, stdev= 0.00, samples=1 00:11:51.806 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:51.806 lat (usec) : 100=84.94%, 250=15.05%, 500=0.01% 00:11:51.806 cpu : usr=6.90%, sys=14.40%, ctx=10032, majf=0, minf=1 00:11:51.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.806 issued rwts: total=4911,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.806 job1: (groupid=0, jobs=1): err= 0: pid=3240558: Wed Nov 27 05:28:47 2024 00:11:51.806 read: IOPS=4091, BW=16.0MiB/s 
(16.8MB/s)(16.0MiB/1001msec) 00:11:51.806 slat (nsec): min=8293, max=30876, avg=8870.19, stdev=866.25 00:11:51.806 clat (usec): min=71, max=234, avg=105.03, stdev=28.47 00:11:51.806 lat (usec): min=84, max=244, avg=113.90, stdev=28.62 00:11:51.806 clat percentiles (usec): 00:11:51.806 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:11:51.806 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:11:51.806 | 70.00th=[ 101], 80.00th=[ 120], 90.00th=[ 167], 95.00th=[ 176], 00:11:51.806 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 231], 99.95th=[ 235], 00:11:51.806 | 99.99th=[ 235] 00:11:51.806 write: IOPS=4455, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1001msec); 0 zone resets 00:11:51.806 slat (nsec): min=10177, max=38990, avg=11430.76, stdev=1162.07 00:11:51.806 clat (usec): min=71, max=302, avg=103.26, stdev=26.40 00:11:51.806 lat (usec): min=82, max=319, avg=114.69, stdev=26.52 00:11:51.806 clat percentiles (usec): 00:11:51.806 | 1.00th=[ 77], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:11:51.806 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 101], 00:11:51.806 | 70.00th=[ 113], 80.00th=[ 120], 90.00th=[ 153], 95.00th=[ 163], 00:11:51.806 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 212], 99.95th=[ 223], 00:11:51.806 | 99.99th=[ 302] 00:11:51.806 bw ( KiB/s): min=19736, max=19736, per=30.12%, avg=19736.00, stdev= 0.00, samples=1 00:11:51.806 iops : min= 4934, max= 4934, avg=4934.00, stdev= 0.00, samples=1 00:11:51.806 lat (usec) : 100=64.00%, 250=35.99%, 500=0.01% 00:11:51.806 cpu : usr=7.70%, sys=10.50%, ctx=8556, majf=0, minf=1 00:11:51.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.806 issued rwts: total=4096,4460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.806 latency : target=0, window=0, percentile=100.00%, depth=1 
00:11:51.806 job2: (groupid=0, jobs=1): err= 0: pid=3240578: Wed Nov 27 05:28:47 2024 00:11:51.806 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:51.806 slat (nsec): min=8544, max=34143, avg=9640.28, stdev=1638.07 00:11:51.806 clat (usec): min=85, max=246, avg=147.80, stdev=17.87 00:11:51.806 lat (usec): min=94, max=255, avg=157.44, stdev=17.78 00:11:51.806 clat percentiles (usec): 00:11:51.806 | 1.00th=[ 100], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 137], 00:11:51.806 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:11:51.806 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 182], 00:11:51.806 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 239], 99.95th=[ 245], 00:11:51.806 | 99.99th=[ 247] 00:11:51.806 write: IOPS=3360, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec); 0 zone resets 00:11:51.807 slat (nsec): min=10594, max=42126, avg=12063.56, stdev=2113.31 00:11:51.807 clat (usec): min=77, max=228, avg=136.69, stdev=17.20 00:11:51.807 lat (usec): min=89, max=239, avg=148.75, stdev=17.16 00:11:51.807 clat percentiles (usec): 00:11:51.807 | 1.00th=[ 93], 5.00th=[ 116], 10.00th=[ 121], 20.00th=[ 126], 00:11:51.807 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:11:51.807 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 163], 95.00th=[ 169], 00:11:51.807 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 212], 99.95th=[ 223], 00:11:51.807 | 99.99th=[ 229] 00:11:51.807 bw ( KiB/s): min=14256, max=14256, per=21.76%, avg=14256.00, stdev= 0.00, samples=1 00:11:51.807 iops : min= 3564, max= 3564, avg=3564.00, stdev= 0.00, samples=1 00:11:51.807 lat (usec) : 100=1.79%, 250=98.21% 00:11:51.807 cpu : usr=4.60%, sys=9.60%, ctx=6436, majf=0, minf=1 00:11:51.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.807 issued rwts: 
total=3072,3364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.807 job3: (groupid=0, jobs=1): err= 0: pid=3240585: Wed Nov 27 05:28:47 2024 00:11:51.807 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:51.807 slat (nsec): min=8513, max=34596, avg=9386.92, stdev=1016.10 00:11:51.807 clat (usec): min=84, max=237, avg=148.14, stdev=17.11 00:11:51.807 lat (usec): min=93, max=246, avg=157.53, stdev=17.13 00:11:51.807 clat percentiles (usec): 00:11:51.807 | 1.00th=[ 102], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 137], 00:11:51.807 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:11:51.807 | 70.00th=[ 151], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 180], 00:11:51.807 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 217], 00:11:51.807 | 99.99th=[ 237] 00:11:51.807 write: IOPS=3447, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec); 0 zone resets 00:11:51.807 slat (nsec): min=10571, max=43670, avg=11519.85, stdev=1306.05 00:11:51.807 clat (usec): min=77, max=243, avg=133.68, stdev=20.94 00:11:51.807 lat (usec): min=88, max=254, avg=145.20, stdev=20.96 00:11:51.807 clat percentiles (usec): 00:11:51.807 | 1.00th=[ 86], 5.00th=[ 92], 10.00th=[ 99], 20.00th=[ 124], 00:11:51.807 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:11:51.807 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 163], 95.00th=[ 169], 00:11:51.807 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 206], 00:11:51.807 | 99.99th=[ 243] 00:11:51.807 bw ( KiB/s): min=14288, max=14288, per=21.81%, avg=14288.00, stdev= 0.00, samples=1 00:11:51.807 iops : min= 3572, max= 3572, avg=3572.00, stdev= 0.00, samples=1 00:11:51.807 lat (usec) : 100=5.90%, 250=94.10% 00:11:51.807 cpu : usr=4.80%, sys=9.30%, ctx=6523, majf=0, minf=2 00:11:51.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:51.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.807 issued rwts: total=3072,3451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.807 00:11:51.807 Run status group 0 (all jobs): 00:11:51.807 READ: bw=59.1MiB/s (62.0MB/s), 12.0MiB/s-19.2MiB/s (12.6MB/s-20.1MB/s), io=59.2MiB (62.1MB), run=1001-1001msec 00:11:51.807 WRITE: bw=64.0MiB/s (67.1MB/s), 13.1MiB/s-20.0MiB/s (13.8MB/s-20.9MB/s), io=64.0MiB (67.2MB), run=1001-1001msec 00:11:51.807 00:11:51.807 Disk stats (read/write): 00:11:51.807 nvme0n1: ios=4146/4142, merge=0/0, ticks=326/333, in_queue=659, util=84.27% 00:11:51.807 nvme0n2: ios=3584/3995, merge=0/0, ticks=316/357, in_queue=673, util=85.10% 00:11:51.807 nvme0n3: ios=2560/2901, merge=0/0, ticks=341/361, in_queue=702, util=88.44% 00:11:51.807 nvme0n4: ios=2560/2906, merge=0/0, ticks=356/362, in_queue=718, util=89.48% 00:11:51.807 05:28:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:51.807 [global] 00:11:51.807 thread=1 00:11:51.807 invalidate=1 00:11:51.807 rw=randwrite 00:11:51.807 time_based=1 00:11:51.807 runtime=1 00:11:51.807 ioengine=libaio 00:11:51.807 direct=1 00:11:51.807 bs=4096 00:11:51.807 iodepth=1 00:11:51.807 norandommap=0 00:11:51.807 numjobs=1 00:11:51.807 00:11:51.807 verify_dump=1 00:11:51.807 verify_backlog=512 00:11:51.807 verify_state_save=0 00:11:51.807 do_verify=1 00:11:51.807 verify=crc32c-intel 00:11:51.807 [job0] 00:11:51.807 filename=/dev/nvme0n1 00:11:51.807 [job1] 00:11:51.807 filename=/dev/nvme0n2 00:11:51.807 [job2] 00:11:51.807 filename=/dev/nvme0n3 00:11:51.807 [job3] 00:11:51.807 filename=/dev/nvme0n4 00:11:51.807 Could not set queue depth (nvme0n1) 00:11:51.807 Could not set queue depth (nvme0n2) 00:11:51.807 Could not set queue depth (nvme0n3) 00:11:51.807 Could not 
set queue depth (nvme0n4) 00:11:52.065 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:52.065 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:52.065 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:52.065 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:52.065 fio-3.35 00:11:52.065 Starting 4 threads 00:11:53.436 00:11:53.436 job0: (groupid=0, jobs=1): err= 0: pid=3241003: Wed Nov 27 05:28:49 2024 00:11:53.436 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:53.436 slat (nsec): min=8444, max=21183, avg=9072.61, stdev=726.74 00:11:53.436 clat (usec): min=85, max=271, avg=172.32, stdev=12.71 00:11:53.436 lat (usec): min=95, max=279, avg=181.40, stdev=12.71 00:11:53.436 clat percentiles (usec): 00:11:53.436 | 1.00th=[ 122], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:11:53.436 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:11:53.436 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 188], 00:11:53.436 | 99.00th=[ 212], 99.50th=[ 227], 99.90th=[ 243], 99.95th=[ 247], 00:11:53.436 | 99.99th=[ 273] 00:11:53.436 write: IOPS=3046, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:11:53.436 slat (nsec): min=8465, max=58100, avg=10913.53, stdev=1372.68 00:11:53.436 clat (usec): min=81, max=239, avg=160.69, stdev=16.38 00:11:53.436 lat (usec): min=92, max=249, avg=171.60, stdev=16.42 00:11:53.436 clat percentiles (usec): 00:11:53.436 | 1.00th=[ 95], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 153], 00:11:53.436 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:11:53.436 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 182], 00:11:53.436 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 229], 99.95th=[ 235], 00:11:53.436 | 99.99th=[ 239] 
00:11:53.436 bw ( KiB/s): min=12288, max=12288, per=21.72%, avg=12288.00, stdev= 0.00, samples=1 00:11:53.436 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:53.436 lat (usec) : 100=1.07%, 250=98.91%, 500=0.02% 00:11:53.436 cpu : usr=4.80%, sys=7.00%, ctx=5611, majf=0, minf=1 00:11:53.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.436 issued rwts: total=2560,3050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.436 job1: (groupid=0, jobs=1): err= 0: pid=3241023: Wed Nov 27 05:28:49 2024 00:11:53.436 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:11:53.436 slat (nsec): min=8305, max=34785, avg=8926.09, stdev=1099.37 00:11:53.436 clat (usec): min=68, max=224, avg=95.75, stdev=30.20 00:11:53.436 lat (usec): min=77, max=233, avg=104.68, stdev=30.49 00:11:53.437 clat percentiles (usec): 00:11:53.437 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 80], 00:11:53.437 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 85], 00:11:53.437 | 70.00th=[ 87], 80.00th=[ 92], 90.00th=[ 159], 95.00th=[ 167], 00:11:53.437 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 206], 99.95th=[ 212], 00:11:53.437 | 99.99th=[ 225] 00:11:53.437 write: IOPS=4826, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1001msec); 0 zone resets 00:11:53.437 slat (nsec): min=10206, max=38429, avg=11070.40, stdev=985.48 00:11:53.437 clat (usec): min=61, max=225, avg=91.04, stdev=29.28 00:11:53.437 lat (usec): min=76, max=236, avg=102.11, stdev=29.26 00:11:53.437 clat percentiles (usec): 00:11:53.437 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 76], 00:11:53.437 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 82], 00:11:53.437 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 153], 95.00th=[ 163], 
00:11:53.437 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 198], 99.95th=[ 212], 00:11:53.437 | 99.99th=[ 227] 00:11:53.437 bw ( KiB/s): min=22456, max=22456, per=39.69%, avg=22456.00, stdev= 0.00, samples=1 00:11:53.437 iops : min= 5614, max= 5614, avg=5614.00, stdev= 0.00, samples=1 00:11:53.437 lat (usec) : 100=83.82%, 250=16.18% 00:11:53.437 cpu : usr=7.60%, sys=12.30%, ctx=9439, majf=0, minf=1 00:11:53.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.437 issued rwts: total=4608,4831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.437 job2: (groupid=0, jobs=1): err= 0: pid=3241027: Wed Nov 27 05:28:49 2024 00:11:53.437 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:53.437 slat (nsec): min=8528, max=21187, avg=9452.82, stdev=889.09 00:11:53.437 clat (usec): min=87, max=247, avg=172.14, stdev=13.57 00:11:53.437 lat (usec): min=97, max=257, avg=181.59, stdev=13.59 00:11:53.437 clat percentiles (usec): 00:11:53.437 | 1.00th=[ 113], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 165], 00:11:53.437 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:11:53.437 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:11:53.437 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 237], 99.95th=[ 239], 00:11:53.437 | 99.99th=[ 247] 00:11:53.437 write: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec); 0 zone resets 00:11:53.437 slat (nsec): min=10414, max=42792, avg=11203.99, stdev=1173.21 00:11:53.437 clat (usec): min=84, max=230, avg=160.78, stdev=16.99 00:11:53.437 lat (usec): min=95, max=241, avg=171.98, stdev=17.01 00:11:53.437 clat percentiles (usec): 00:11:53.437 | 1.00th=[ 96], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 153], 00:11:53.437 | 30.00th=[ 157], 
40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:11:53.437 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 188], 00:11:53.437 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 229], 99.95th=[ 231], 00:11:53.437 | 99.99th=[ 231] 00:11:53.437 bw ( KiB/s): min=12288, max=12288, per=21.72%, avg=12288.00, stdev= 0.00, samples=1 00:11:53.437 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:53.437 lat (usec) : 100=1.21%, 250=98.79% 00:11:53.437 cpu : usr=4.40%, sys=7.60%, ctx=5601, majf=0, minf=1 00:11:53.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.437 issued rwts: total=2560,3041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.437 job3: (groupid=0, jobs=1): err= 0: pid=3241028: Wed Nov 27 05:28:49 2024 00:11:53.437 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:53.437 slat (nsec): min=8541, max=30423, avg=9319.43, stdev=1111.44 00:11:53.437 clat (usec): min=80, max=243, avg=148.90, stdev=39.07 00:11:53.437 lat (usec): min=90, max=252, avg=158.22, stdev=39.00 00:11:53.437 clat percentiles (usec): 00:11:53.437 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 95], 00:11:53.437 | 30.00th=[ 104], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:11:53.437 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 188], 00:11:53.437 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 237], 99.95th=[ 243], 00:11:53.437 | 99.99th=[ 243] 00:11:53.437 write: IOPS=3233, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:11:53.437 slat (nsec): min=10241, max=38362, avg=11449.60, stdev=1860.08 00:11:53.437 clat (usec): min=77, max=228, avg=142.71, stdev=35.55 00:11:53.437 lat (usec): min=89, max=239, avg=154.16, stdev=35.46 00:11:53.437 clat 
percentiles (usec): 00:11:53.437 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 92], 00:11:53.437 | 30.00th=[ 143], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:11:53.437 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:11:53.437 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 225], 99.95th=[ 227], 00:11:53.437 | 99.99th=[ 229] 00:11:53.437 bw ( KiB/s): min=12288, max=12288, per=21.72%, avg=12288.00, stdev= 0.00, samples=1 00:11:53.437 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:53.437 lat (usec) : 100=27.12%, 250=72.88% 00:11:53.437 cpu : usr=4.60%, sys=8.50%, ctx=6309, majf=0, minf=1 00:11:53.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.437 issued rwts: total=3072,3237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.437 00:11:53.437 Run status group 0 (all jobs): 00:11:53.437 READ: bw=49.9MiB/s (52.4MB/s), 9.99MiB/s-18.0MiB/s (10.5MB/s-18.9MB/s), io=50.0MiB (52.4MB), run=1001-1001msec 00:11:53.437 WRITE: bw=55.3MiB/s (57.9MB/s), 11.9MiB/s-18.9MiB/s (12.4MB/s-19.8MB/s), io=55.3MiB (58.0MB), run=1001-1001msec 00:11:53.437 00:11:53.437 Disk stats (read/write): 00:11:53.437 nvme0n1: ios=2124/2560, merge=0/0, ticks=359/385, in_queue=744, util=84.17% 00:11:53.437 nvme0n2: ios=4096/4368, merge=0/0, ticks=334/345, in_queue=679, util=85.25% 00:11:53.437 nvme0n3: ios=2065/2560, merge=0/0, ticks=328/390, in_queue=718, util=88.32% 00:11:53.437 nvme0n4: ios=2181/2560, merge=0/0, ticks=336/386, in_queue=722, util=89.46% 00:11:53.437 05:28:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:53.437 [global] 00:11:53.437 
thread=1 00:11:53.437 invalidate=1 00:11:53.437 rw=write 00:11:53.437 time_based=1 00:11:53.437 runtime=1 00:11:53.437 ioengine=libaio 00:11:53.437 direct=1 00:11:53.437 bs=4096 00:11:53.437 iodepth=128 00:11:53.437 norandommap=0 00:11:53.437 numjobs=1 00:11:53.437 00:11:53.437 verify_dump=1 00:11:53.437 verify_backlog=512 00:11:53.437 verify_state_save=0 00:11:53.437 do_verify=1 00:11:53.437 verify=crc32c-intel 00:11:53.437 [job0] 00:11:53.437 filename=/dev/nvme0n1 00:11:53.437 [job1] 00:11:53.437 filename=/dev/nvme0n2 00:11:53.437 [job2] 00:11:53.437 filename=/dev/nvme0n3 00:11:53.437 [job3] 00:11:53.437 filename=/dev/nvme0n4 00:11:53.437 Could not set queue depth (nvme0n1) 00:11:53.437 Could not set queue depth (nvme0n2) 00:11:53.437 Could not set queue depth (nvme0n3) 00:11:53.437 Could not set queue depth (nvme0n4) 00:11:53.695 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.695 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.695 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.695 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.695 fio-3.35 00:11:53.695 Starting 4 threads 00:11:55.067 00:11:55.067 job0: (groupid=0, jobs=1): err= 0: pid=3241442: Wed Nov 27 05:28:51 2024 00:11:55.067 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:11:55.067 slat (usec): min=2, max=1871, avg=160.20, stdev=458.81 00:11:55.067 clat (usec): min=17942, max=22378, avg=20717.40, stdev=603.14 00:11:55.067 lat (usec): min=18716, max=22449, avg=20877.59, stdev=418.43 00:11:55.067 clat percentiles (usec): 00:11:55.067 | 1.00th=[18744], 5.00th=[19268], 10.00th=[19792], 20.00th=[20317], 00:11:55.067 | 30.00th=[20579], 40.00th=[20841], 50.00th=[20841], 60.00th=[20841], 00:11:55.067 | 70.00th=[21103], 
80.00th=[21103], 90.00th=[21365], 95.00th=[21365], 00:11:55.068 | 99.00th=[21627], 99.50th=[21627], 99.90th=[22152], 99.95th=[22414], 00:11:55.068 | 99.99th=[22414] 00:11:55.068 write: IOPS=3255, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1004msec); 0 zone resets 00:11:55.068 slat (usec): min=2, max=1926, avg=150.05, stdev=425.59 00:11:55.068 clat (usec): min=2099, max=21449, avg=19246.92, stdev=1898.39 00:11:55.068 lat (usec): min=3586, max=21453, avg=19396.97, stdev=1854.36 00:11:55.068 clat percentiles (usec): 00:11:55.068 | 1.00th=[ 6849], 5.00th=[17957], 10.00th=[18482], 20.00th=[19268], 00:11:55.068 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:11:55.068 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:11:55.068 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:11:55.068 | 99.99th=[21365] 00:11:55.068 bw ( KiB/s): min=12288, max=12848, per=15.02%, avg=12568.00, stdev=395.98, samples=2 00:11:55.068 iops : min= 3072, max= 3212, avg=3142.00, stdev=98.99, samples=2 00:11:55.068 lat (msec) : 4=0.19%, 10=0.55%, 20=48.53%, 50=50.73% 00:11:55.068 cpu : usr=2.49%, sys=3.69%, ctx=709, majf=0, minf=1 00:11:55.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.068 issued rwts: total=3072,3269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.068 job1: (groupid=0, jobs=1): err= 0: pid=3241449: Wed Nov 27 05:28:51 2024 00:11:55.068 read: IOPS=10.7k, BW=42.0MiB/s (44.0MB/s)(42.0MiB/1001msec) 00:11:55.068 slat (nsec): min=1975, max=1288.1k, avg=44455.13, stdev=152528.10 00:11:55.068 clat (usec): min=4818, max=7351, avg=5955.46, stdev=333.57 00:11:55.068 lat (usec): min=4825, max=7355, avg=5999.91, stdev=346.56 00:11:55.068 clat percentiles 
(usec): 00:11:55.068 | 1.00th=[ 5211], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5669], 00:11:55.068 | 30.00th=[ 5800], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 5997], 00:11:55.068 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6587], 00:11:55.068 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7177], 99.95th=[ 7308], 00:11:55.068 | 99.99th=[ 7373] 00:11:55.068 write: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(43.8MiB/1001msec); 0 zone resets 00:11:55.068 slat (usec): min=2, max=923, avg=42.00, stdev=139.83 00:11:55.068 clat (usec): min=462, max=7006, avg=5602.55, stdev=424.79 00:11:55.068 lat (usec): min=1190, max=7067, avg=5644.55, stdev=432.69 00:11:55.068 clat percentiles (usec): 00:11:55.068 | 1.00th=[ 4883], 5.00th=[ 5145], 10.00th=[ 5211], 20.00th=[ 5342], 00:11:55.068 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5669], 00:11:55.068 | 70.00th=[ 5800], 80.00th=[ 5866], 90.00th=[ 6063], 95.00th=[ 6194], 00:11:55.068 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 6980], 99.95th=[ 6980], 00:11:55.068 | 99.99th=[ 6980] 00:11:55.068 bw ( KiB/s): min=45056, max=45056, per=53.83%, avg=45056.00, stdev= 0.00, samples=1 00:11:55.068 iops : min=11264, max=11264, avg=11264.00, stdev= 0.00, samples=1 00:11:55.068 lat (usec) : 500=0.01% 00:11:55.068 lat (msec) : 2=0.15%, 4=0.19%, 10=99.66% 00:11:55.068 cpu : usr=5.80%, sys=11.50%, ctx=1579, majf=0, minf=2 00:11:55.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.068 issued rwts: total=10752,11213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.068 job2: (groupid=0, jobs=1): err= 0: pid=3241450: Wed Nov 27 05:28:51 2024 00:11:55.068 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:11:55.068 slat (usec): min=2, 
max=1929, avg=160.27, stdev=482.60 00:11:55.068 clat (usec): min=18037, max=21662, avg=20733.95, stdev=602.18 00:11:55.068 lat (usec): min=19605, max=22719, avg=20894.22, stdev=381.37 00:11:55.068 clat percentiles (usec): 00:11:55.068 | 1.00th=[18744], 5.00th=[19268], 10.00th=[19792], 20.00th=[20317], 00:11:55.068 | 30.00th=[20579], 40.00th=[20841], 50.00th=[20841], 60.00th=[21103], 00:11:55.068 | 70.00th=[21103], 80.00th=[21103], 90.00th=[21365], 95.00th=[21365], 00:11:55.068 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:11:55.068 | 99.99th=[21627] 00:11:55.068 write: IOPS=3249, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1004msec); 0 zone resets 00:11:55.068 slat (usec): min=2, max=2017, avg=150.17, stdev=447.27 00:11:55.068 clat (usec): min=2854, max=22232, avg=19282.29, stdev=1760.10 00:11:55.068 lat (usec): min=4452, max=22236, avg=19432.46, stdev=1704.10 00:11:55.068 clat percentiles (usec): 00:11:55.068 | 1.00th=[ 7701], 5.00th=[17957], 10.00th=[18220], 20.00th=[19268], 00:11:55.068 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:11:55.068 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20055], 95.00th=[20317], 00:11:55.068 | 99.00th=[20841], 99.50th=[20841], 99.90th=[22152], 99.95th=[22152], 00:11:55.068 | 99.99th=[22152] 00:11:55.068 bw ( KiB/s): min=12288, max=12792, per=14.98%, avg=12540.00, stdev=356.38, samples=2 00:11:55.068 iops : min= 3072, max= 3198, avg=3135.00, stdev=89.10, samples=2 00:11:55.068 lat (msec) : 4=0.02%, 10=0.68%, 20=48.04%, 50=51.26% 00:11:55.068 cpu : usr=2.39%, sys=3.89%, ctx=719, majf=0, minf=1 00:11:55.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.068 issued rwts: total=3072,3262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.068 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:11:55.068 job3: (groupid=0, jobs=1): err= 0: pid=3241451: Wed Nov 27 05:28:51 2024 00:11:55.068 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:11:55.068 slat (usec): min=2, max=1915, avg=160.36, stdev=481.47 00:11:55.068 clat (usec): min=18167, max=21666, avg=20733.26, stdev=599.28 00:11:55.068 lat (usec): min=19666, max=21673, avg=20893.62, stdev=364.74 00:11:55.068 clat percentiles (usec): 00:11:55.068 | 1.00th=[18744], 5.00th=[19268], 10.00th=[19792], 20.00th=[20317], 00:11:55.068 | 30.00th=[20579], 40.00th=[20841], 50.00th=[20841], 60.00th=[21103], 00:11:55.068 | 70.00th=[21103], 80.00th=[21103], 90.00th=[21365], 95.00th=[21365], 00:11:55.068 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:11:55.068 | 99.99th=[21627] 00:11:55.068 write: IOPS=3251, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1004msec); 0 zone resets 00:11:55.068 slat (usec): min=2, max=2047, avg=150.42, stdev=449.13 00:11:55.068 clat (usec): min=2802, max=22228, avg=19278.23, stdev=1722.59 00:11:55.068 lat (usec): min=4426, max=22232, avg=19428.66, stdev=1663.10 00:11:55.068 clat percentiles (usec): 00:11:55.068 | 1.00th=[ 7635], 5.00th=[17695], 10.00th=[18220], 20.00th=[19268], 00:11:55.068 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:11:55.068 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:11:55.068 | 99.00th=[20579], 99.50th=[20579], 99.90th=[22152], 99.95th=[22152], 00:11:55.068 | 99.99th=[22152] 00:11:55.068 bw ( KiB/s): min=12288, max=12816, per=15.00%, avg=12552.00, stdev=373.35, samples=2 00:11:55.068 iops : min= 3072, max= 3204, avg=3138.00, stdev=93.34, samples=2 00:11:55.068 lat (msec) : 4=0.02%, 10=0.62%, 20=48.57%, 50=50.80% 00:11:55.068 cpu : usr=1.89%, sys=3.99%, ctx=673, majf=0, minf=1 00:11:55.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.068 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.068 issued rwts: total=3072,3265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.068 00:11:55.068 Run status group 0 (all jobs): 00:11:55.068 READ: bw=77.7MiB/s (81.5MB/s), 12.0MiB/s-42.0MiB/s (12.5MB/s-44.0MB/s), io=78.0MiB (81.8MB), run=1001-1004msec 00:11:55.068 WRITE: bw=81.7MiB/s (85.7MB/s), 12.7MiB/s-43.8MiB/s (13.3MB/s-45.9MB/s), io=82.1MiB (86.1MB), run=1001-1004msec 00:11:55.068 00:11:55.068 Disk stats (read/write): 00:11:55.068 nvme0n1: ios=2610/2692, merge=0/0, ticks=13214/12965, in_queue=26179, util=84.47% 00:11:55.068 nvme0n2: ios=9046/9216, merge=0/0, ticks=13046/12266, in_queue=25312, util=85.20% 00:11:55.068 nvme0n3: ios=2560/2685, merge=0/0, ticks=13235/12993, in_queue=26228, util=88.36% 00:11:55.068 nvme0n4: ios=2560/2688, merge=0/0, ticks=13275/13005, in_queue=26280, util=89.50% 00:11:55.068 05:28:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:55.068 [global] 00:11:55.068 thread=1 00:11:55.068 invalidate=1 00:11:55.068 rw=randwrite 00:11:55.068 time_based=1 00:11:55.068 runtime=1 00:11:55.068 ioengine=libaio 00:11:55.068 direct=1 00:11:55.068 bs=4096 00:11:55.068 iodepth=128 00:11:55.068 norandommap=0 00:11:55.068 numjobs=1 00:11:55.068 00:11:55.068 verify_dump=1 00:11:55.068 verify_backlog=512 00:11:55.068 verify_state_save=0 00:11:55.068 do_verify=1 00:11:55.068 verify=crc32c-intel 00:11:55.068 [job0] 00:11:55.068 filename=/dev/nvme0n1 00:11:55.068 [job1] 00:11:55.068 filename=/dev/nvme0n2 00:11:55.068 [job2] 00:11:55.068 filename=/dev/nvme0n3 00:11:55.068 [job3] 00:11:55.068 filename=/dev/nvme0n4 00:11:55.068 Could not set queue depth (nvme0n1) 00:11:55.068 Could not set queue depth (nvme0n2) 00:11:55.068 Could not set queue depth (nvme0n3) 00:11:55.068 Could not set queue 
depth (nvme0n4) 00:11:55.326 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:55.326 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:55.326 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:55.326 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:55.326 fio-3.35 00:11:55.326 Starting 4 threads 00:11:56.697 00:11:56.697 job0: (groupid=0, jobs=1): err= 0: pid=3241869: Wed Nov 27 05:28:52 2024 00:11:56.697 read: IOPS=4923, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1004msec) 00:11:56.697 slat (usec): min=2, max=3822, avg=100.64, stdev=321.36 00:11:56.697 clat (usec): min=3303, max=21896, avg=12975.46, stdev=6255.36 00:11:56.697 lat (usec): min=3882, max=21914, avg=13076.10, stdev=6298.81 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[ 6390], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 7046], 00:11:56.697 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[17433], 00:11:56.697 | 70.00th=[19530], 80.00th=[20579], 90.00th=[20841], 95.00th=[21103], 00:11:56.697 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21103], 99.95th=[21365], 00:11:56.697 | 99.99th=[21890] 00:11:56.697 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:11:56.697 slat (usec): min=2, max=3591, avg=92.70, stdev=280.00 00:11:56.697 clat (usec): min=4798, max=20905, avg=12221.98, stdev=5902.62 00:11:56.697 lat (usec): min=4807, max=20910, avg=12314.68, stdev=5945.30 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:11:56.697 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7504], 60.00th=[16581], 00:11:56.697 | 70.00th=[18220], 80.00th=[19268], 90.00th=[19530], 95.00th=[19792], 00:11:56.697 | 99.00th=[20317], 99.50th=[20579], 
99.90th=[20841], 99.95th=[20841], 00:11:56.697 | 99.99th=[20841] 00:11:56.697 bw ( KiB/s): min=12416, max=28544, per=23.06%, avg=20480.00, stdev=11404.22, samples=2 00:11:56.697 iops : min= 3104, max= 7136, avg=5120.00, stdev=2851.05, samples=2 00:11:56.697 lat (msec) : 4=0.10%, 10=52.46%, 20=32.83%, 50=14.61% 00:11:56.697 cpu : usr=3.99%, sys=5.48%, ctx=1107, majf=0, minf=1 00:11:56.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:56.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:56.697 issued rwts: total=4943,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:56.697 job1: (groupid=0, jobs=1): err= 0: pid=3241874: Wed Nov 27 05:28:52 2024 00:11:56.697 read: IOPS=9618, BW=37.6MiB/s (39.4MB/s)(37.7MiB/1004msec) 00:11:56.697 slat (nsec): min=1953, max=2450.3k, avg=50581.65, stdev=188944.56 00:11:56.697 clat (usec): min=2510, max=11504, avg=6756.38, stdev=1094.79 00:11:56.697 lat (usec): min=4035, max=13070, avg=6806.96, stdev=1102.01 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 5932], 00:11:56.697 | 30.00th=[ 6063], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6718], 00:11:56.697 | 70.00th=[ 7046], 80.00th=[ 7373], 90.00th=[ 8848], 95.00th=[ 9110], 00:11:56.697 | 99.00th=[ 9503], 99.50th=[10028], 99.90th=[11338], 99.95th=[11469], 00:11:56.697 | 99.99th=[11469] 00:11:56.697 write: IOPS=9689, BW=37.8MiB/s (39.7MB/s)(38.0MiB/1004msec); 0 zone resets 00:11:56.697 slat (usec): min=2, max=2274, avg=48.06, stdev=181.71 00:11:56.697 clat (usec): min=4147, max=11329, avg=6377.76, stdev=1104.70 00:11:56.697 lat (usec): min=4155, max=11338, avg=6425.82, stdev=1114.27 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5604], 
00:11:56.697 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 6390], 00:11:56.697 | 70.00th=[ 6652], 80.00th=[ 6980], 90.00th=[ 8455], 95.00th=[ 8717], 00:11:56.697 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10290], 99.95th=[10683], 00:11:56.697 | 99.99th=[11338] 00:11:56.697 bw ( KiB/s): min=32768, max=45056, per=43.82%, avg=38912.00, stdev=8688.93, samples=2 00:11:56.697 iops : min= 8192, max=11264, avg=9728.00, stdev=2172.23, samples=2 00:11:56.697 lat (msec) : 4=0.01%, 10=99.57%, 20=0.43% 00:11:56.697 cpu : usr=5.98%, sys=8.77%, ctx=1217, majf=0, minf=2 00:11:56.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:56.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:56.697 issued rwts: total=9657,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:56.697 job2: (groupid=0, jobs=1): err= 0: pid=3241875: Wed Nov 27 05:28:52 2024 00:11:56.697 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:56.697 slat (usec): min=2, max=1880, avg=137.51, stdev=329.79 00:11:56.697 clat (usec): min=12675, max=22794, avg=17781.75, stdev=2648.44 00:11:56.697 lat (usec): min=13672, max=22805, avg=17919.26, stdev=2649.48 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[13566], 5.00th=[14091], 10.00th=[14353], 20.00th=[14615], 00:11:56.697 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17171], 60.00th=[19792], 00:11:56.697 | 70.00th=[20317], 80.00th=[20841], 90.00th=[20841], 95.00th=[21103], 00:11:56.697 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21365], 99.95th=[22676], 00:11:56.697 | 99.99th=[22676] 00:11:56.697 write: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1003msec); 0 zone resets 00:11:56.697 slat (usec): min=2, max=1796, avg=130.65, stdev=311.53 00:11:56.697 clat (usec): min=2079, max=22079, avg=16854.62, stdev=2847.07 
00:11:56.697 lat (usec): min=2891, max=22088, avg=16985.27, stdev=2848.88 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[ 6259], 5.00th=[13304], 10.00th=[13566], 20.00th=[14091], 00:11:56.697 | 30.00th=[15664], 40.00th=[16188], 50.00th=[16581], 60.00th=[18744], 00:11:56.697 | 70.00th=[19268], 80.00th=[19530], 90.00th=[19792], 95.00th=[19792], 00:11:56.697 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:11:56.697 | 99.99th=[22152] 00:11:56.697 bw ( KiB/s): min=12392, max=16408, per=16.22%, avg=14400.00, stdev=2839.74, samples=2 00:11:56.697 iops : min= 3098, max= 4102, avg=3600.00, stdev=709.94, samples=2 00:11:56.697 lat (msec) : 4=0.29%, 10=0.81%, 20=78.84%, 50=20.07% 00:11:56.697 cpu : usr=2.69%, sys=4.59%, ctx=1094, majf=0, minf=1 00:11:56.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:56.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:56.697 issued rwts: total=3584,3727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:56.697 job3: (groupid=0, jobs=1): err= 0: pid=3241876: Wed Nov 27 05:28:52 2024 00:11:56.697 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:56.697 slat (usec): min=2, max=2703, avg=137.28, stdev=430.97 00:11:56.697 clat (usec): min=11744, max=21293, avg=17775.10, stdev=2651.74 00:11:56.697 lat (usec): min=13667, max=21297, avg=17912.38, stdev=2638.02 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[13304], 5.00th=[14222], 10.00th=[14484], 20.00th=[14615], 00:11:56.697 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17171], 60.00th=[19530], 00:11:56.697 | 70.00th=[20317], 80.00th=[20841], 90.00th=[21103], 95.00th=[21103], 00:11:56.697 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:11:56.697 | 99.99th=[21365] 00:11:56.697 write: IOPS=3701, 
BW=14.5MiB/s (15.2MB/s)(14.5MiB/1003msec); 0 zone resets 00:11:56.697 slat (usec): min=2, max=2899, avg=131.27, stdev=403.12 00:11:56.697 clat (usec): min=1977, max=20903, avg=16911.88, stdev=2769.28 00:11:56.697 lat (usec): min=3912, max=21108, avg=17043.15, stdev=2755.67 00:11:56.697 clat percentiles (usec): 00:11:56.697 | 1.00th=[ 7242], 5.00th=[13435], 10.00th=[13698], 20.00th=[14091], 00:11:56.697 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16581], 60.00th=[18744], 00:11:56.697 | 70.00th=[19268], 80.00th=[19530], 90.00th=[19792], 95.00th=[20055], 00:11:56.697 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:11:56.697 | 99.99th=[20841] 00:11:56.697 bw ( KiB/s): min=12336, max=16352, per=16.15%, avg=14344.00, stdev=2839.74, samples=2 00:11:56.697 iops : min= 3084, max= 4088, avg=3586.00, stdev=709.94, samples=2 00:11:56.697 lat (msec) : 2=0.01%, 4=0.10%, 10=0.78%, 20=79.02%, 50=20.09% 00:11:56.697 cpu : usr=2.59%, sys=4.59%, ctx=822, majf=0, minf=1 00:11:56.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:56.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:56.697 issued rwts: total=3584,3713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:56.697 00:11:56.697 Run status group 0 (all jobs): 00:11:56.697 READ: bw=84.7MiB/s (88.8MB/s), 14.0MiB/s-37.6MiB/s (14.6MB/s-39.4MB/s), io=85.0MiB (89.2MB), run=1003-1004msec 00:11:56.697 WRITE: bw=86.7MiB/s (90.9MB/s), 14.5MiB/s-37.8MiB/s (15.2MB/s-39.7MB/s), io=87.1MiB (91.3MB), run=1003-1004msec 00:11:56.697 00:11:56.697 Disk stats (read/write): 00:11:56.697 nvme0n1: ios=4250/4608, merge=0/0, ticks=12734/13201, in_queue=25935, util=84.55% 00:11:56.697 nvme0n2: ios=8251/8704, merge=0/0, ticks=25464/25401, in_queue=50865, util=85.29% 00:11:56.697 nvme0n3: ios=2714/3072, merge=0/0, 
ticks=12771/13423, in_queue=26194, util=88.45% 00:11:56.697 nvme0n4: ios=2713/3072, merge=0/0, ticks=12616/13402, in_queue=26018, util=89.40% 00:11:56.697 05:28:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:56.697 05:28:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3241976 00:11:56.697 05:28:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:56.697 05:28:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:56.697 [global] 00:11:56.697 thread=1 00:11:56.697 invalidate=1 00:11:56.697 rw=read 00:11:56.697 time_based=1 00:11:56.697 runtime=10 00:11:56.697 ioengine=libaio 00:11:56.697 direct=1 00:11:56.698 bs=4096 00:11:56.698 iodepth=1 00:11:56.698 norandommap=1 00:11:56.698 numjobs=1 00:11:56.698 00:11:56.698 [job0] 00:11:56.698 filename=/dev/nvme0n1 00:11:56.698 [job1] 00:11:56.698 filename=/dev/nvme0n2 00:11:56.698 [job2] 00:11:56.698 filename=/dev/nvme0n3 00:11:56.698 [job3] 00:11:56.698 filename=/dev/nvme0n4 00:11:56.698 Could not set queue depth (nvme0n1) 00:11:56.698 Could not set queue depth (nvme0n2) 00:11:56.698 Could not set queue depth (nvme0n3) 00:11:56.698 Could not set queue depth (nvme0n4) 00:11:56.955 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.955 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.955 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.955 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.955 fio-3.35 00:11:56.955 Starting 4 threads 00:11:59.480 05:28:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:59.737 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=75378688, buflen=4096 00:11:59.738 fio: pid=3242302, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:59.738 05:28:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:59.738 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=75665408, buflen=4096 00:11:59.738 fio: pid=3242301, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:59.738 05:28:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.738 05:28:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:59.996 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=60063744, buflen=4096 00:11:59.996 fio: pid=3242299, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:00.253 05:28:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:00.253 05:28:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:00.511 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5939200, buflen=4096 00:12:00.511 fio: pid=3242300, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:00.511 00:12:00.511 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3242299: Wed Nov 27 05:28:56 2024 00:12:00.511 read: IOPS=10.1k, 
BW=39.3MiB/s (41.2MB/s)(121MiB/3088msec) 00:12:00.511 slat (usec): min=8, max=15920, avg=10.23, stdev=138.02 00:12:00.511 clat (usec): min=52, max=11439, avg=86.82, stdev=85.07 00:12:00.511 lat (usec): min=60, max=16039, avg=97.05, stdev=162.31 00:12:00.511 clat percentiles (usec): 00:12:00.511 | 1.00th=[ 64], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 81], 00:12:00.511 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 86], 00:12:00.511 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 94], 95.00th=[ 102], 00:12:00.511 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 139], 99.95th=[ 163], 00:12:00.511 | 99.99th=[ 938] 00:12:00.511 bw ( KiB/s): min=42080, max=42144, per=35.52%, avg=42107.20, stdev=27.48, samples=5 00:12:00.511 iops : min=10520, max=10536, avg=10526.80, stdev= 6.87, samples=5 00:12:00.511 lat (usec) : 100=94.58%, 250=5.39%, 500=0.01%, 1000=0.01% 00:12:00.511 lat (msec) : 10=0.01%, 20=0.01% 00:12:00.511 cpu : usr=5.28%, sys=13.61%, ctx=31054, majf=0, minf=1 00:12:00.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.511 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.511 issued rwts: total=31049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.511 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3242300: Wed Nov 27 05:28:56 2024 00:12:00.511 read: IOPS=9927, BW=38.8MiB/s (40.7MB/s)(134MiB/3447msec) 00:12:00.511 slat (usec): min=3, max=26692, avg=10.35, stdev=207.13 00:12:00.511 clat (usec): min=38, max=419, avg=88.86, stdev=24.03 00:12:00.511 lat (usec): min=51, max=26805, avg=99.20, stdev=208.77 00:12:00.511 clat percentiles (usec): 00:12:00.511 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 60], 20.00th=[ 79], 00:12:00.511 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 
00:12:00.511 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 133], 95.00th=[ 145], 00:12:00.511 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 198], 99.95th=[ 202], 00:12:00.511 | 99.99th=[ 215] 00:12:00.511 bw ( KiB/s): min=31232, max=41848, per=31.87%, avg=37770.67, stdev=4640.03, samples=6 00:12:00.511 iops : min= 7808, max=10462, avg=9442.67, stdev=1160.01, samples=6 00:12:00.511 lat (usec) : 50=0.05%, 100=84.29%, 250=15.65%, 500=0.01% 00:12:00.511 cpu : usr=4.99%, sys=11.43%, ctx=34227, majf=0, minf=2 00:12:00.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.511 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.511 issued rwts: total=34219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.511 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3242301: Wed Nov 27 05:28:56 2024 00:12:00.511 read: IOPS=6461, BW=25.2MiB/s (26.5MB/s)(72.2MiB/2859msec) 00:12:00.511 slat (usec): min=8, max=15804, avg=11.16, stdev=154.65 00:12:00.511 clat (usec): min=77, max=424, avg=140.87, stdev=24.66 00:12:00.511 lat (usec): min=86, max=15922, avg=152.04, stdev=156.29 00:12:00.511 clat percentiles (usec): 00:12:00.512 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 100], 20.00th=[ 129], 00:12:00.512 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:12:00.512 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 188], 00:12:00.512 | 99.00th=[ 202], 99.50th=[ 204], 99.90th=[ 212], 99.95th=[ 217], 00:12:00.512 | 99.99th=[ 231] 00:12:00.512 bw ( KiB/s): min=25256, max=25920, per=21.53%, avg=25524.80, stdev=270.62, samples=5 00:12:00.512 iops : min= 6314, max= 6480, avg=6381.20, stdev=67.66, samples=5 00:12:00.512 lat (usec) : 100=10.24%, 250=89.75%, 500=0.01% 00:12:00.512 cpu : usr=3.50%, sys=8.71%, 
ctx=18476, majf=0, minf=2 00:12:00.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.512 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.512 issued rwts: total=18474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.512 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3242302: Wed Nov 27 05:28:56 2024 00:12:00.512 read: IOPS=6979, BW=27.3MiB/s (28.6MB/s)(71.9MiB/2637msec) 00:12:00.512 slat (nsec): min=8345, max=46346, avg=9079.20, stdev=954.13 00:12:00.512 clat (usec): min=80, max=232, avg=132.41, stdev=29.87 00:12:00.512 lat (usec): min=88, max=241, avg=141.49, stdev=29.99 00:12:00.512 clat percentiles (usec): 00:12:00.512 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 98], 00:12:00.512 | 30.00th=[ 104], 40.00th=[ 135], 50.00th=[ 143], 60.00th=[ 147], 00:12:00.512 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 188], 00:12:00.512 | 99.00th=[ 202], 99.50th=[ 204], 99.90th=[ 212], 99.95th=[ 217], 00:12:00.512 | 99.99th=[ 227] 00:12:00.512 bw ( KiB/s): min=25264, max=33720, per=23.78%, avg=28184.00, stdev=3992.53, samples=5 00:12:00.512 iops : min= 6316, max= 8430, avg=7046.00, stdev=998.13, samples=5 00:12:00.512 lat (usec) : 100=24.13%, 250=75.86% 00:12:00.512 cpu : usr=3.30%, sys=10.05%, ctx=18405, majf=0, minf=2 00:12:00.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.512 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.512 issued rwts: total=18404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.512 00:12:00.512 Run status group 0 (all jobs): 
00:12:00.512 READ: bw=116MiB/s (121MB/s), 25.2MiB/s-39.3MiB/s (26.5MB/s-41.2MB/s), io=399MiB (418MB), run=2637-3447msec 00:12:00.512 00:12:00.512 Disk stats (read/write): 00:12:00.512 nvme0n1: ios=28935/0, merge=0/0, ticks=2218/0, in_queue=2218, util=94.29% 00:12:00.512 nvme0n2: ios=32697/0, merge=0/0, ticks=2656/0, in_queue=2656, util=93.67% 00:12:00.512 nvme0n3: ios=18473/0, merge=0/0, ticks=2413/0, in_queue=2413, util=95.48% 00:12:00.512 nvme0n4: ios=18151/0, merge=0/0, ticks=2218/0, in_queue=2218, util=96.46% 00:12:00.512 05:28:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:00.512 05:28:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:01.076 05:28:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:01.076 05:28:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:01.334 05:28:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:01.334 05:28:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:01.898 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:01.898 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:02.157 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:02.157 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:02.415 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:02.415 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3241976 00:12:02.415 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:02.415 05:28:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:03.346 05:28:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:03.346 nvmf hotplug test: fio failed as expected 00:12:03.346 05:28:59 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:03.604 rmmod nvme_rdma 00:12:03.604 rmmod nvme_fabrics 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' 
-n 3238795 ']' 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3238795 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3238795 ']' 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3238795 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.604 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238795 00:12:03.861 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.862 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.862 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238795' 00:12:03.862 killing process with pid 3238795 00:12:03.862 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3238795 00:12:03.862 05:29:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3238795 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:05.762 00:12:05.762 real 0m32.125s 00:12:05.762 user 2m21.251s 00:12:05.762 sys 0m11.897s 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.762 
************************************ 00:12:05.762 END TEST nvmf_fio_target 00:12:05.762 ************************************ 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:05.762 ************************************ 00:12:05.762 START TEST nvmf_bdevio 00:12:05.762 ************************************ 00:12:05.762 05:29:01 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:05.762 * Looking for test storage... 00:12:05.762 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:05.762 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.763 05:29:02 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.763 --rc genhtml_branch_coverage=1 00:12:05.763 --rc genhtml_function_coverage=1 00:12:05.763 --rc genhtml_legend=1 00:12:05.763 --rc geninfo_all_blocks=1 00:12:05.763 --rc geninfo_unexecuted_blocks=1 00:12:05.763 00:12:05.763 ' 00:12:05.763 05:29:02 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.763 --rc genhtml_branch_coverage=1 00:12:05.763 --rc genhtml_function_coverage=1 00:12:05.763 --rc genhtml_legend=1 00:12:05.763 --rc geninfo_all_blocks=1 00:12:05.763 --rc geninfo_unexecuted_blocks=1 00:12:05.763 00:12:05.763 ' 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.763 --rc genhtml_branch_coverage=1 00:12:05.763 --rc genhtml_function_coverage=1 00:12:05.763 --rc genhtml_legend=1 00:12:05.763 --rc geninfo_all_blocks=1 00:12:05.763 --rc geninfo_unexecuted_blocks=1 00:12:05.763 00:12:05.763 ' 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.763 --rc genhtml_branch_coverage=1 00:12:05.763 --rc genhtml_function_coverage=1 00:12:05.763 --rc genhtml_legend=1 00:12:05.763 --rc geninfo_all_blocks=1 00:12:05.763 --rc geninfo_unexecuted_blocks=1 00:12:05.763 00:12:05.763 ' 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.763 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.764 05:29:02 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.764 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.764 05:29:02 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.764 05:29:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:15.737 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:15.737 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:15.737 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.737 05:29:10 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:15.737 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:15.737 05:29:10 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.737 05:29:10 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:15.737 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:15.738 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:15.738 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:15.738 altname enp217s0f0np0 00:12:15.738 altname ens818f0np0 00:12:15.738 inet 192.168.100.8/24 scope global mlx_0_0 00:12:15.738 valid_lft forever preferred_lft forever 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:15.738 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:15.738 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:15.738 altname enp217s0f1np1 00:12:15.738 altname ens818f1np1 00:12:15.738 inet 192.168.100.9/24 scope global mlx_0_1 00:12:15.738 valid_lft forever preferred_lft forever 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 
addr show mlx_0_0 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:15.738 192.168.100.9' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:15.738 192.168.100.9' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:15.738 192.168.100.9' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:15.738 05:29:10 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3247840 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3247840 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3247840 ']' 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.738 05:29:10 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.738 [2024-11-27 05:29:11.069289] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:12:15.738 [2024-11-27 05:29:11.069387] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.738 [2024-11-27 05:29:11.220323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.738 [2024-11-27 05:29:11.326156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.738 [2024-11-27 05:29:11.326204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.738 [2024-11-27 05:29:11.326217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.738 [2024-11-27 05:29:11.326231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.738 [2024-11-27 05:29:11.326258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:15.738 [2024-11-27 05:29:11.328961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:15.738 [2024-11-27 05:29:11.329057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:15.738 [2024-11-27 05:29:11.329124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.738 [2024-11-27 05:29:11.329150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.738 05:29:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.738 [2024-11-27 05:29:11.970419] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fdf551bd940) succeed. 00:12:15.738 [2024-11-27 05:29:11.980059] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fdf55179940) succeed. 
00:12:15.738 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.738 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.739 Malloc0 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.739 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:12:15.996 [2024-11-27 05:29:12.331042] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:15.996 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:15.996 { 00:12:15.996 "params": { 00:12:15.996 "name": "Nvme$subsystem", 00:12:15.996 "trtype": "$TEST_TRANSPORT", 00:12:15.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.997 "adrfam": "ipv4", 00:12:15.997 "trsvcid": "$NVMF_PORT", 00:12:15.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.997 "hdgst": ${hdgst:-false}, 00:12:15.997 "ddgst": ${ddgst:-false} 00:12:15.997 }, 00:12:15.997 "method": "bdev_nvme_attach_controller" 00:12:15.997 } 00:12:15.997 EOF 00:12:15.997 )") 00:12:15.997 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:15.997 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:15.997 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:15.997 05:29:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:15.997 "params": { 00:12:15.997 "name": "Nvme1", 00:12:15.997 "trtype": "rdma", 00:12:15.997 "traddr": "192.168.100.8", 00:12:15.997 "adrfam": "ipv4", 00:12:15.997 "trsvcid": "4420", 00:12:15.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.997 "hdgst": false, 00:12:15.997 "ddgst": false 00:12:15.997 }, 00:12:15.997 "method": "bdev_nvme_attach_controller" 00:12:15.997 }' 00:12:15.997 [2024-11-27 05:29:12.405176] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:12:15.997 [2024-11-27 05:29:12.405273] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248130 ] 00:12:15.997 [2024-11-27 05:29:12.579704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.254 [2024-11-27 05:29:12.688908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.254 [2024-11-27 05:29:12.688977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.254 [2024-11-27 05:29:12.688981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.511 I/O targets: 00:12:16.511 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:16.511 00:12:16.511 00:12:16.511 CUnit - A unit testing framework for C - Version 2.1-3 00:12:16.511 http://cunit.sourceforge.net/ 00:12:16.511 00:12:16.511 00:12:16.511 Suite: bdevio tests on: Nvme1n1 00:12:16.769 Test: blockdev write read block ...passed 00:12:16.769 Test: blockdev write zeroes read block ...passed 00:12:16.769 Test: blockdev write zeroes read no split ...passed 00:12:16.769 Test: blockdev write zeroes read split 
...passed 00:12:16.769 Test: blockdev write zeroes read split partial ...passed 00:12:16.769 Test: blockdev reset ...[2024-11-27 05:29:13.168063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:16.769 [2024-11-27 05:29:13.204287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:12:16.769 [2024-11-27 05:29:13.237518] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:16.769 passed 00:12:16.769 Test: blockdev write read 8 blocks ...passed 00:12:16.769 Test: blockdev write read size > 128k ...passed 00:12:16.769 Test: blockdev write read invalid size ...passed 00:12:16.769 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:16.769 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:16.769 Test: blockdev write read max offset ...passed 00:12:16.769 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:16.769 Test: blockdev writev readv 8 blocks ...passed 00:12:16.769 Test: blockdev writev readv 30 x 1block ...passed 00:12:16.769 Test: blockdev writev readv block ...passed 00:12:16.769 Test: blockdev writev readv size > 128k ...passed 00:12:16.769 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:16.769 Test: blockdev comparev and writev ...[2024-11-27 05:29:13.242904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.769 [2024-11-27 05:29:13.242942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.242959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:12:16.769 [2024-11-27 05:29:13.242974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.243168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.769 [2024-11-27 05:29:13.243186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.243199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.769 [2024-11-27 05:29:13.243214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.243382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.769 [2024-11-27 05:29:13.243401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.243414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.769 [2024-11-27 05:29:13.243428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.243594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.769 [2024-11-27 05:29:13.243626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.243640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.769 [2024-11-27 05:29:13.243654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:16.769 passed 00:12:16.769 Test: blockdev nvme passthru rw ...passed 00:12:16.769 Test: blockdev nvme passthru vendor specific ...[2024-11-27 05:29:13.244006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:16.769 [2024-11-27 05:29:13.244028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.244080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:16.769 [2024-11-27 05:29:13.244099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.244167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:16.769 [2024-11-27 05:29:13.244186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:16.769 [2024-11-27 05:29:13.244240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:16.769 [2024-11-27 05:29:13.244256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:16.769 passed 00:12:16.769 Test: blockdev nvme admin passthru ...passed 00:12:16.769 Test: blockdev copy ...passed 00:12:16.769 00:12:16.769 Run Summary: Type Total Ran Passed Failed Inactive 00:12:16.769 suites 1 1 n/a 0 0 00:12:16.769 tests 23 23 23 0 0 
00:12:16.769 asserts 152 152 152 0 n/a 00:12:16.769 00:12:16.769 Elapsed time = 0.359 seconds 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:17.702 rmmod nvme_rdma 00:12:17.702 rmmod nvme_fabrics 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3247840 ']' 
00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3247840 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3247840 ']' 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3247840 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3247840 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3247840' 00:12:17.702 killing process with pid 3247840 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3247840 00:12:17.702 05:29:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3247840 00:12:19.602 05:29:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:19.602 05:29:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:19.602 00:12:19.602 real 0m14.151s 00:12:19.602 user 0m23.685s 00:12:19.602 sys 0m7.667s 00:12:19.602 05:29:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.602 05:29:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:19.602 ************************************ 00:12:19.602 END TEST nvmf_bdevio 00:12:19.602 
************************************ 00:12:19.602 05:29:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:19.602 00:12:19.602 real 5m4.908s 00:12:19.602 user 12m37.387s 00:12:19.602 sys 1m59.709s 00:12:19.602 05:29:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.602 05:29:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:19.602 ************************************ 00:12:19.602 END TEST nvmf_target_core 00:12:19.602 ************************************ 00:12:19.860 05:29:16 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:19.860 05:29:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.860 05:29:16 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.860 05:29:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:19.860 ************************************ 00:12:19.860 START TEST nvmf_target_extra 00:12:19.860 ************************************ 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:19.860 * Looking for test storage... 
00:12:19.860 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.860 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.861 --rc genhtml_branch_coverage=1 00:12:19.861 --rc genhtml_function_coverage=1 00:12:19.861 --rc genhtml_legend=1 00:12:19.861 --rc geninfo_all_blocks=1 00:12:19.861 --rc geninfo_unexecuted_blocks=1 00:12:19.861 00:12:19.861 ' 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.861 --rc 
genhtml_branch_coverage=1 00:12:19.861 --rc genhtml_function_coverage=1 00:12:19.861 --rc genhtml_legend=1 00:12:19.861 --rc geninfo_all_blocks=1 00:12:19.861 --rc geninfo_unexecuted_blocks=1 00:12:19.861 00:12:19.861 ' 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.861 --rc genhtml_branch_coverage=1 00:12:19.861 --rc genhtml_function_coverage=1 00:12:19.861 --rc genhtml_legend=1 00:12:19.861 --rc geninfo_all_blocks=1 00:12:19.861 --rc geninfo_unexecuted_blocks=1 00:12:19.861 00:12:19.861 ' 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.861 --rc genhtml_branch_coverage=1 00:12:19.861 --rc genhtml_function_coverage=1 00:12:19.861 --rc genhtml_legend=1 00:12:19.861 --rc geninfo_all_blocks=1 00:12:19.861 --rc geninfo_unexecuted_blocks=1 00:12:19.861 00:12:19.861 ' 00:12:19.861 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.120 05:29:16 
nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.120 ************************************ 00:12:20.120 START TEST nvmf_example 00:12:20.120 ************************************ 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:20.120 * Looking for test storage... 00:12:20.120 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 
00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:20.120 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.121 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
scripts/common.sh@366 -- # ver2[v]=2 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.380 --rc genhtml_branch_coverage=1 00:12:20.380 --rc genhtml_function_coverage=1 00:12:20.380 --rc genhtml_legend=1 00:12:20.380 --rc geninfo_all_blocks=1 00:12:20.380 --rc geninfo_unexecuted_blocks=1 00:12:20.380 00:12:20.380 ' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.380 --rc genhtml_branch_coverage=1 00:12:20.380 --rc genhtml_function_coverage=1 00:12:20.380 --rc genhtml_legend=1 00:12:20.380 --rc geninfo_all_blocks=1 00:12:20.380 --rc geninfo_unexecuted_blocks=1 00:12:20.380 00:12:20.380 ' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.380 --rc genhtml_branch_coverage=1 00:12:20.380 --rc genhtml_function_coverage=1 00:12:20.380 --rc genhtml_legend=1 00:12:20.380 --rc geninfo_all_blocks=1 00:12:20.380 --rc geninfo_unexecuted_blocks=1 00:12:20.380 00:12:20.380 ' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.380 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:20.380 --rc genhtml_branch_coverage=1 00:12:20.380 --rc genhtml_function_coverage=1 00:12:20.380 --rc genhtml_legend=1 00:12:20.380 --rc geninfo_all_blocks=1 00:12:20.380 --rc geninfo_unexecuted_blocks=1 00:12:20.380 00:12:20.380 ' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:20.380 05:29:16 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.380 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:20.380 05:29:16 
nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:20.380 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.381 05:29:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.355 05:29:25 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.355 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == 
rdma ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:30.356 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:30.356 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 
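The odd-looking `[[ 0x1015 == \0\x\1\0\1\7 ]]` lines above are just xtrace's escaped rendering of a plain pattern match (`[[ 0x1015 == 0x1017 ]]`); the backslashes keep the right-hand side literal when the trace is printed. Based on the trace, common.sh enumerates the Mellanox PCI device IDs it found (`0x15b3 - 0x1015` here), tests a couple of IDs separately, and then falls through to the generic RDMA path with `nvme connect -i 15`. A hedged sketch of that dispatch pattern, with the IDs taken from the trace and a hypothetical function name:

```shell
#!/usr/bin/env bash
# Classify a Mellanox PCI device ID as printed in the "Found ..." lines above.
classify_mlx() {
  local dev_id=$1
  case "$dev_id" in
    0x1017|0x1019)
      # the two IDs the traced script tests separately (@376/@377)
      echo "checked separately" ;;
    0x1013|0x1015|0x101b|0x101d|0x1021)
      # IDs collected into the mlx array earlier in the trace
      echo "generic rdma path" ;;
    *)
      echo "unknown" ;;
  esac
}

classify_mlx 0x1015   # -> generic rdma path
```

In the run above both ports report `0x1015`, so neither special-case branch fires and the script proceeds with the generic `nvme connect -i 15` setting.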
00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:30.356 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.356 
05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:30.356 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:30.356 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:30.356 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:30.356 altname enp217s0f0np0 00:12:30.356 altname ens818f0np0 00:12:30.356 inet 192.168.100.8/24 scope global mlx_0_0 
00:12:30.356 valid_lft forever preferred_lft forever 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:30.356 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:30.356 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:30.356 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:30.356 altname enp217s0f1np1 00:12:30.356 altname ens818f1np1 00:12:30.356 inet 192.168.100.9/24 scope global mlx_0_1 00:12:30.356 valid_lft forever preferred_lft forever 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@90 -- # get_rdma_if_list 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:30.357 05:29:25 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:30.357 192.168.100.9' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:30.357 192.168.100.9' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:30.357 192.168.100.9' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3252910 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3252910 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3252910 ']' 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.357 05:29:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:30.357 05:29:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:42.585 Initializing NVMe Controllers 00:12:42.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:42.585 Initialization complete. Launching workers. 
00:12:42.585 ======================================================== 00:12:42.585 Latency(us) 00:12:42.585 Device Information : IOPS MiB/s Average min max 00:12:42.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 23283.10 90.95 2750.20 745.54 15961.67 00:12:42.585 ======================================================== 00:12:42.585 Total : 23283.10 90.95 2750.20 745.54 15961.67 00:12:42.585 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:42.585 rmmod nvme_rdma 00:12:42.585 rmmod nvme_fabrics 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3252910 ']' 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3252910 
00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3252910 ']' 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3252910 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:42.585 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.586 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3252910 00:12:42.586 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:42.586 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:42.586 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3252910' 00:12:42.586 killing process with pid 3252910 00:12:42.586 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3252910 00:12:42.586 05:29:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3252910 00:12:43.524 nvmf threads initialize successfully 00:12:43.524 bdev subsystem init successfully 00:12:43.524 created a nvmf target service 00:12:43.524 create targets's poll groups done 00:12:43.524 all subsystems of target started 00:12:43.524 nvmf target is running 00:12:43.524 all subsystems of target stopped 00:12:43.524 destroy targets's poll groups done 00:12:43.524 destroyed the nvmf target service 00:12:43.524 bdev subsystem finish successfully 00:12:43.524 nvmf threads destroy successfully 00:12:43.524 05:29:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.524 05:29:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:43.524 
05:29:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:43.524 05:29:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.524 05:29:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.524 00:12:43.524 real 0m23.509s 00:12:43.524 user 0m58.767s 00:12:43.524 sys 0m7.273s 00:12:43.524 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.524 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.524 ************************************ 00:12:43.524 END TEST nvmf_example 00:12:43.524 ************************************ 00:12:43.524 05:29:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:43.524 05:29:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.524 05:29:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.524 05:29:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.784 ************************************ 00:12:43.784 START TEST nvmf_filesystem 00:12:43.784 ************************************ 00:12:43.784 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:43.785 * Looking for test storage... 
00:12:43.785 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 
00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.785 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.785 --rc genhtml_branch_coverage=1 00:12:43.785 --rc genhtml_function_coverage=1 00:12:43.785 --rc genhtml_legend=1 00:12:43.785 --rc geninfo_all_blocks=1 00:12:43.785 --rc geninfo_unexecuted_blocks=1 00:12:43.785 00:12:43.785 ' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.785 --rc genhtml_branch_coverage=1 00:12:43.785 --rc genhtml_function_coverage=1 00:12:43.785 --rc genhtml_legend=1 00:12:43.785 --rc geninfo_all_blocks=1 00:12:43.785 --rc geninfo_unexecuted_blocks=1 00:12:43.785 00:12:43.785 ' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.785 --rc genhtml_branch_coverage=1 00:12:43.785 --rc genhtml_function_coverage=1 00:12:43.785 --rc genhtml_legend=1 00:12:43.785 --rc geninfo_all_blocks=1 00:12:43.785 --rc geninfo_unexecuted_blocks=1 00:12:43.785 00:12:43.785 ' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.785 --rc genhtml_branch_coverage=1 00:12:43.785 --rc genhtml_function_coverage=1 00:12:43.785 --rc genhtml_legend=1 00:12:43.785 --rc geninfo_all_blocks=1 00:12:43.785 --rc geninfo_unexecuted_blocks=1 00:12:43.785 00:12:43.785 ' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:43.785 
05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:43.785 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:43.785 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:43.786 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:43.786 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:43.786 
05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:43.786 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:43.787 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:43.787 #define SPDK_CONFIG_H 00:12:43.787 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:43.787 #define SPDK_CONFIG_APPS 1 00:12:43.787 #define SPDK_CONFIG_ARCH native 00:12:43.787 #define SPDK_CONFIG_ASAN 1 00:12:43.787 #undef SPDK_CONFIG_AVAHI 00:12:43.787 #undef SPDK_CONFIG_CET 00:12:43.787 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:43.787 #define SPDK_CONFIG_COVERAGE 1 00:12:43.787 #define SPDK_CONFIG_CROSS_PREFIX 00:12:43.787 #undef SPDK_CONFIG_CRYPTO 00:12:43.787 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:43.787 #undef SPDK_CONFIG_CUSTOMOCF 00:12:43.787 #undef SPDK_CONFIG_DAOS 00:12:43.787 #define SPDK_CONFIG_DAOS_DIR 00:12:43.787 #define SPDK_CONFIG_DEBUG 1 00:12:43.787 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:43.787 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:43.787 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:43.787 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:43.787 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:43.787 #undef SPDK_CONFIG_DPDK_UADK 00:12:43.787 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:43.787 #define SPDK_CONFIG_EXAMPLES 1 00:12:43.787 #undef SPDK_CONFIG_FC 00:12:43.787 #define SPDK_CONFIG_FC_PATH 00:12:43.787 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:43.787 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:43.787 #define SPDK_CONFIG_FSDEV 1 00:12:43.787 #undef SPDK_CONFIG_FUSE 00:12:43.787 #undef SPDK_CONFIG_FUZZER 00:12:43.787 #define SPDK_CONFIG_FUZZER_LIB 00:12:43.787 #undef SPDK_CONFIG_GOLANG 00:12:43.787 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:43.787 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:43.787 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:43.787 #define SPDK_CONFIG_HAVE_KEYUTILS 1 
00:12:43.787 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:43.787 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:43.787 #undef SPDK_CONFIG_HAVE_LZ4 00:12:43.787 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:43.787 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:43.787 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:43.787 #define SPDK_CONFIG_IDXD 1 00:12:43.787 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:43.787 #undef SPDK_CONFIG_IPSEC_MB 00:12:43.787 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:43.787 #define SPDK_CONFIG_ISAL 1 00:12:43.787 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:43.787 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:43.787 #define SPDK_CONFIG_LIBDIR 00:12:43.787 #undef SPDK_CONFIG_LTO 00:12:43.787 #define SPDK_CONFIG_MAX_LCORES 128 00:12:43.787 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:43.787 #define SPDK_CONFIG_NVME_CUSE 1 00:12:43.787 #undef SPDK_CONFIG_OCF 00:12:43.787 #define SPDK_CONFIG_OCF_PATH 00:12:43.787 #define SPDK_CONFIG_OPENSSL_PATH 00:12:43.787 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:43.787 #define SPDK_CONFIG_PGO_DIR 00:12:43.787 #undef SPDK_CONFIG_PGO_USE 00:12:43.787 #define SPDK_CONFIG_PREFIX /usr/local 00:12:43.787 #undef SPDK_CONFIG_RAID5F 00:12:43.787 #undef SPDK_CONFIG_RBD 00:12:43.787 #define SPDK_CONFIG_RDMA 1 00:12:43.787 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:43.787 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:43.787 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:43.787 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:43.787 #define SPDK_CONFIG_SHARED 1 00:12:43.787 #undef SPDK_CONFIG_SMA 00:12:43.787 #define SPDK_CONFIG_TESTS 1 00:12:43.787 #undef SPDK_CONFIG_TSAN 00:12:43.787 #define SPDK_CONFIG_UBLK 1 00:12:43.787 #define SPDK_CONFIG_UBSAN 1 00:12:43.787 #undef SPDK_CONFIG_UNIT_TESTS 00:12:43.787 #undef SPDK_CONFIG_URING 00:12:43.787 #define SPDK_CONFIG_URING_PATH 00:12:43.787 #undef SPDK_CONFIG_URING_ZNS 00:12:43.787 #undef SPDK_CONFIG_USDT 00:12:43.787 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:43.787 #undef 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:43.787 #undef SPDK_CONFIG_VFIO_USER 00:12:43.787 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:43.787 #define SPDK_CONFIG_VHOST 1 00:12:43.787 #define SPDK_CONFIG_VIRTIO 1 00:12:43.787 #undef SPDK_CONFIG_VTUNE 00:12:43.787 #define SPDK_CONFIG_VTUNE_DIR 00:12:43.787 #define SPDK_CONFIG_WERROR 1 00:12:43.787 #define SPDK_CONFIG_WPDK_DIR 00:12:43.787 #undef SPDK_CONFIG_XNVME 00:12:43.787 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:43.787 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 
00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:43.788 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:44.051 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 
00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 
-- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:44.051 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:44.052 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 
00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:44.052 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:44.052 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 
00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:44.053 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 
00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3255638 ]] 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3255638 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.1diidd 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ 
-n '' ]] 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1diidd/tests/target /tmp/spdk.1diidd 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- 
# avails["$mount"]=4096 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54745567232 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730586624 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6985019392 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30850498560 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size 
use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12322701312 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346118144 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23416832 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30864257024 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1036288 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:12:44.053 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:44.053 * Looking for test storage... 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:44.053 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54745567232 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:44.054 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9199611904 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.054 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:44.054 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.054 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.054 --rc genhtml_branch_coverage=1 00:12:44.054 --rc genhtml_function_coverage=1 00:12:44.054 --rc genhtml_legend=1 00:12:44.054 --rc geninfo_all_blocks=1 00:12:44.054 --rc 
geninfo_unexecuted_blocks=1 00:12:44.054 00:12:44.054 ' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.054 --rc genhtml_branch_coverage=1 00:12:44.054 --rc genhtml_function_coverage=1 00:12:44.054 --rc genhtml_legend=1 00:12:44.054 --rc geninfo_all_blocks=1 00:12:44.054 --rc geninfo_unexecuted_blocks=1 00:12:44.054 00:12:44.054 ' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.054 --rc genhtml_branch_coverage=1 00:12:44.054 --rc genhtml_function_coverage=1 00:12:44.054 --rc genhtml_legend=1 00:12:44.054 --rc geninfo_all_blocks=1 00:12:44.054 --rc geninfo_unexecuted_blocks=1 00:12:44.054 00:12:44.054 ' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.054 --rc genhtml_branch_coverage=1 00:12:44.054 --rc genhtml_function_coverage=1 00:12:44.054 --rc genhtml_legend=1 00:12:44.054 --rc geninfo_all_blocks=1 00:12:44.054 --rc geninfo_unexecuted_blocks=1 00:12:44.054 00:12:44.054 ' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.054 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.054 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.055 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.055 05:29:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.055 05:29:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.292 
05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.292 05:29:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:52.292 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:52.292 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.292 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:52.293 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:52.293 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:52.293 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:52.596 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:52.596 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.596 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.596 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:52.596 05:29:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:52.596 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:52.596 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.596 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:52.597 05:29:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:52.597 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:52.597 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:52.597 altname enp217s0f0np0 00:12:52.597 altname ens818f0np0 00:12:52.597 inet 192.168.100.8/24 scope global mlx_0_0 00:12:52.597 valid_lft forever preferred_lft forever 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:52.597 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:52.597 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:52.597 altname enp217s0f1np1 00:12:52.597 altname ens818f1np1 00:12:52.597 inet 192.168.100.9/24 scope global mlx_0_1 00:12:52.597 valid_lft forever preferred_lft forever 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.597 05:29:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:12:52.597 192.168.100.9' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:52.597 192.168.100.9' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:52.597 192.168.100.9' 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:12:52.597 05:29:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.597 ************************************ 00:12:52.597 START TEST nvmf_filesystem_no_in_capsule 00:12:52.597 ************************************ 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3259578 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3259578 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3259578 ']' 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.597 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.597 [2024-11-27 05:29:49.167833] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:12:52.598 [2024-11-27 05:29:49.167920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.855 [2024-11-27 05:29:49.324722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.855 [2024-11-27 05:29:49.423698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.855 [2024-11-27 05:29:49.423750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.855 [2024-11-27 05:29:49.423762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.855 [2024-11-27 05:29:49.423792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.855 [2024-11-27 05:29:49.423802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:52.855 [2024-11-27 05:29:49.426232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.855 [2024-11-27 05:29:49.426308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.855 [2024-11-27 05:29:49.426416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.855 [2024-11-27 05:29:49.426424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.421 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.421 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:53.421 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.421 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.421 05:29:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.421 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.421 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:53.421 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:53.421 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.421 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:12:53.679 [2024-11-27 05:29:50.012694] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:53.679 [2024-11-27 05:29:50.057843] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f7f7d9a6940) succeed. 00:12:53.679 [2024-11-27 05:29:50.068009] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f7f7d962940) succeed. 00:12:53.679 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.679 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:53.679 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.679 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 Malloc1 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 [2024-11-27 05:29:50.781597] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:54.247 05:29:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:54.247 { 00:12:54.247 "name": "Malloc1", 00:12:54.247 "aliases": [ 00:12:54.247 "8afcbd8e-03df-4d74-9e59-b81dba87c150" 00:12:54.247 ], 00:12:54.247 "product_name": "Malloc disk", 00:12:54.247 "block_size": 512, 00:12:54.247 "num_blocks": 1048576, 00:12:54.247 "uuid": "8afcbd8e-03df-4d74-9e59-b81dba87c150", 00:12:54.247 "assigned_rate_limits": { 00:12:54.247 "rw_ios_per_sec": 0, 00:12:54.247 "rw_mbytes_per_sec": 0, 00:12:54.247 "r_mbytes_per_sec": 0, 00:12:54.247 "w_mbytes_per_sec": 0 00:12:54.247 }, 00:12:54.247 "claimed": true, 00:12:54.247 "claim_type": "exclusive_write", 00:12:54.247 "zoned": false, 00:12:54.247 "supported_io_types": { 00:12:54.247 "read": true, 00:12:54.247 "write": true, 00:12:54.247 "unmap": true, 00:12:54.247 "flush": true, 00:12:54.247 "reset": true, 00:12:54.247 "nvme_admin": false, 00:12:54.247 "nvme_io": false, 00:12:54.247 "nvme_io_md": false, 00:12:54.247 "write_zeroes": true, 00:12:54.247 "zcopy": true, 00:12:54.247 "get_zone_info": false, 00:12:54.247 "zone_management": false, 00:12:54.247 "zone_append": false, 00:12:54.247 "compare": false, 00:12:54.247 
"compare_and_write": false, 00:12:54.247 "abort": true, 00:12:54.247 "seek_hole": false, 00:12:54.247 "seek_data": false, 00:12:54.247 "copy": true, 00:12:54.247 "nvme_iov_md": false 00:12:54.247 }, 00:12:54.247 "memory_domains": [ 00:12:54.247 { 00:12:54.247 "dma_device_id": "system", 00:12:54.247 "dma_device_type": 1 00:12:54.247 }, 00:12:54.247 { 00:12:54.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.247 "dma_device_type": 2 00:12:54.247 } 00:12:54.247 ], 00:12:54.247 "driver_specific": {} 00:12:54.247 } 00:12:54.247 ]' 00:12:54.247 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:54.506 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:54.506 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:54.506 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:54.506 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:54.506 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:54.506 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:54.506 05:29:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:55.442 05:29:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.442 05:29:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.442 05:29:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.442 05:29:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:55.442 05:29:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:57.340 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP 
'([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:57.597 05:29:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:57.854 05:29:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 
nvmf_filesystem_create ext4 nvme0n1 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.786 ************************************ 00:12:58.786 START TEST filesystem_ext4 00:12:58.786 ************************************ 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:58.786 05:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:58.786 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:58.786 mke2fs 1.47.0 (5-Feb-2023) 00:12:58.786 Discarding device blocks: 0/522240 done 00:12:58.786 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:58.786 Filesystem UUID: 4eb0ae1c-f804-4fca-a519-da4803ab82ff 00:12:58.786 Superblock backups stored on blocks: 00:12:58.786 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:58.786 00:12:58.786 Allocating group tables: 0/64 done 00:12:58.786 Writing inode tables: 0/64 done 00:12:59.044 Creating journal (8192 blocks): done 00:12:59.044 Writing superblocks and filesystem accounting information: 0/64 done 00:12:59.044 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:59.044 05:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3259578 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:59.044 00:12:59.044 real 0m0.211s 00:12:59.044 user 0m0.037s 00:12:59.044 sys 0m0.074s 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:59.044 ************************************ 00:12:59.044 END TEST filesystem_ext4 00:12:59.044 ************************************ 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 
00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.044 ************************************ 00:12:59.044 START TEST filesystem_btrfs 00:12:59.044 ************************************ 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:59.044 05:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:59.044 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:59.303 btrfs-progs v6.8.1 00:12:59.303 See https://btrfs.readthedocs.io for more information. 00:12:59.303 00:12:59.303 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:59.303 NOTE: several default settings have changed in version 5.15, please make sure 00:12:59.303 this does not affect your deployments: 00:12:59.303 - DUP for metadata (-m dup) 00:12:59.303 - enabled no-holes (-O no-holes) 00:12:59.303 - enabled free-space-tree (-R free-space-tree) 00:12:59.303 00:12:59.303 Label: (null) 00:12:59.303 UUID: 838c9170-b078-4335-856a-f6b1e959928c 00:12:59.303 Node size: 16384 00:12:59.303 Sector size: 4096 (CPU page size: 4096) 00:12:59.303 Filesystem size: 510.00MiB 00:12:59.303 Block group profiles: 00:12:59.303 Data: single 8.00MiB 00:12:59.303 Metadata: DUP 32.00MiB 00:12:59.303 System: DUP 8.00MiB 00:12:59.303 SSD detected: yes 00:12:59.303 Zoned device: no 00:12:59.303 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:59.303 Checksum: crc32c 00:12:59.303 Number of devices: 1 00:12:59.303 Devices: 00:12:59.303 ID SIZE PATH 00:12:59.303 1 510.00MiB /dev/nvme0n1p1 00:12:59.303 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:59.303 05:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3259578 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:59.303 00:12:59.303 real 0m0.259s 00:12:59.303 user 0m0.035s 00:12:59.303 sys 0m0.125s 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:59.303 ************************************ 00:12:59.303 END TEST filesystem_btrfs 00:12:59.303 ************************************ 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.303 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.562 ************************************ 00:12:59.562 START TEST filesystem_xfs 00:12:59.562 ************************************ 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:59.562 05:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:59.562 05:29:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:59.562 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:59.562 = sectsz=512 attr=2, projid32bit=1 00:12:59.562 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:59.562 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:59.562 data = bsize=4096 blocks=130560, imaxpct=25 00:12:59.562 = sunit=0 swidth=0 blks 00:12:59.562 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:59.562 log =internal log bsize=4096 blocks=16384, version=2 00:12:59.562 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:59.562 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:59.562 Discarding blocks...Done. 
00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3259578 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:59.562 05:29:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:59.562 00:12:59.562 real 0m0.223s 00:12:59.562 user 0m0.033s 00:12:59.562 sys 0m0.082s 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.562 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:59.562 ************************************ 00:12:59.562 END TEST filesystem_xfs 00:12:59.562 ************************************ 00:12:59.820 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:59.820 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:59.820 05:29:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3259578 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3259578 ']' 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3259578 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259578 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259578' 00:13:00.754 killing process with pid 3259578 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3259578 00:13:00.754 05:29:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3259578 00:13:04.039 05:29:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:04.039 00:13:04.039 real 0m10.899s 00:13:04.039 user 0m40.929s 00:13:04.039 sys 0m1.449s 00:13:04.039 05:29:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.039 05:29:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.039 ************************************ 00:13:04.039 END TEST nvmf_filesystem_no_in_capsule 00:13:04.039 ************************************ 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.039 05:30:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:04.039 ************************************ 00:13:04.039 START TEST nvmf_filesystem_in_capsule 00:13:04.039 ************************************ 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3261675 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3261675 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3261675 ']' 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.039 05:30:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.039 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.039 [2024-11-27 05:30:00.150257] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:04.039 [2024-11-27 05:30:00.150351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.039 [2024-11-27 05:30:00.308907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.039 [2024-11-27 05:30:00.415514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.039 [2024-11-27 05:30:00.415564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.039 [2024-11-27 05:30:00.415577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.039 [2024-11-27 05:30:00.415590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.039 [2024-11-27 05:30:00.415600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:04.039 [2024-11-27 05:30:00.418126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.039 [2024-11-27 05:30:00.418200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.039 [2024-11-27 05:30:00.418262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.039 [2024-11-27 05:30:00.418269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.606 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.606 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:04.606 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.606 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.606 05:30:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.606 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.606 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:04.606 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:13:04.606 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.606 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.606 [2024-11-27 
05:30:01.042340] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f974f76a940) succeed. 00:13:04.606 [2024-11-27 05:30:01.052230] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f974f726940) succeed. 00:13:04.863 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.864 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:04.864 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.864 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.429 Malloc1 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.429 [2024-11-27 05:30:01.819384] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:05.429 
05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:05.429 { 00:13:05.429 "name": "Malloc1", 00:13:05.429 "aliases": [ 00:13:05.429 "17731792-3220-45f2-8760-146749fc5c3d" 00:13:05.429 ], 00:13:05.429 "product_name": "Malloc disk", 00:13:05.429 "block_size": 512, 00:13:05.429 "num_blocks": 1048576, 00:13:05.429 "uuid": "17731792-3220-45f2-8760-146749fc5c3d", 00:13:05.429 "assigned_rate_limits": { 00:13:05.429 "rw_ios_per_sec": 0, 00:13:05.429 "rw_mbytes_per_sec": 0, 00:13:05.429 "r_mbytes_per_sec": 0, 00:13:05.429 "w_mbytes_per_sec": 0 00:13:05.429 }, 00:13:05.429 "claimed": true, 00:13:05.429 "claim_type": "exclusive_write", 00:13:05.429 "zoned": false, 00:13:05.429 "supported_io_types": { 00:13:05.429 "read": true, 00:13:05.429 "write": true, 00:13:05.429 "unmap": true, 00:13:05.429 "flush": true, 00:13:05.429 "reset": true, 00:13:05.429 "nvme_admin": false, 00:13:05.429 "nvme_io": false, 00:13:05.429 "nvme_io_md": false, 00:13:05.429 "write_zeroes": true, 00:13:05.429 "zcopy": true, 00:13:05.429 "get_zone_info": false, 00:13:05.429 "zone_management": false, 00:13:05.429 "zone_append": false, 00:13:05.429 "compare": false, 00:13:05.429 "compare_and_write": false, 00:13:05.429 "abort": true, 00:13:05.429 "seek_hole": false, 00:13:05.429 "seek_data": false, 00:13:05.429 "copy": true, 00:13:05.429 "nvme_iov_md": false 00:13:05.429 }, 00:13:05.429 "memory_domains": [ 00:13:05.429 { 00:13:05.429 "dma_device_id": "system", 00:13:05.429 "dma_device_type": 1 
00:13:05.429 }, 00:13:05.429 { 00:13:05.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.429 "dma_device_type": 2 00:13:05.429 } 00:13:05.429 ], 00:13:05.429 "driver_specific": {} 00:13:05.429 } 00:13:05.429 ]' 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:05.429 05:30:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:06.364 05:30:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.364 05:30:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.364 05:30:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:06.364 05:30:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:06.364 05:30:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:08.894 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:08.895 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:08.895 05:30:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:08.895 05:30:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:08.895 05:30:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.829 
************************************ 00:13:09.829 START TEST filesystem_in_capsule_ext4 00:13:09.829 ************************************ 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:09.829 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:09.829 05:30:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:09.829 mke2fs 1.47.0 (5-Feb-2023) 00:13:09.829 Discarding device blocks: 0/522240 done 00:13:09.829 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:09.829 Filesystem UUID: 2f58e40c-d805-4086-bfa6-6e0e4ce55336 00:13:09.829 Superblock backups stored on blocks: 00:13:09.829 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:09.829 00:13:09.829 Allocating group tables: 0/64 done 00:13:09.829 Writing inode tables: 0/64 done 00:13:09.829 Creating journal (8192 blocks): done 00:13:10.088 Writing superblocks and filesystem accounting information: 0/64 done 00:13:10.088 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:10.088 05:30:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3261675 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:10.088 00:13:10.088 real 0m0.207s 00:13:10.088 user 0m0.023s 00:13:10.088 sys 0m0.084s 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:10.088 ************************************ 00:13:10.088 END TEST filesystem_in_capsule_ext4 00:13:10.088 ************************************ 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:10.088 05:30:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.088 ************************************ 00:13:10.088 START TEST filesystem_in_capsule_btrfs 00:13:10.088 ************************************ 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 
-- # '[' btrfs = ext4 ']' 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:10.088 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:10.347 btrfs-progs v6.8.1 00:13:10.347 See https://btrfs.readthedocs.io for more information. 00:13:10.347 00:13:10.347 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:10.347 NOTE: several default settings have changed in version 5.15, please make sure 00:13:10.347 this does not affect your deployments: 00:13:10.347 - DUP for metadata (-m dup) 00:13:10.347 - enabled no-holes (-O no-holes) 00:13:10.347 - enabled free-space-tree (-R free-space-tree) 00:13:10.347 00:13:10.347 Label: (null) 00:13:10.347 UUID: 6f804ec9-8496-407d-a702-c1695f80cd48 00:13:10.347 Node size: 16384 00:13:10.347 Sector size: 4096 (CPU page size: 4096) 00:13:10.347 Filesystem size: 510.00MiB 00:13:10.347 Block group profiles: 00:13:10.347 Data: single 8.00MiB 00:13:10.347 Metadata: DUP 32.00MiB 00:13:10.347 System: DUP 8.00MiB 00:13:10.347 SSD detected: yes 00:13:10.347 Zoned device: no 00:13:10.347 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:10.347 Checksum: crc32c 00:13:10.347 Number of devices: 1 00:13:10.347 Devices: 00:13:10.347 ID SIZE PATH 00:13:10.347 1 510.00MiB /dev/nvme0n1p1 00:13:10.347 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3261675 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:10.347 00:13:10.347 real 0m0.252s 00:13:10.347 user 0m0.033s 00:13:10.347 sys 0m0.121s 00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:13:10.347 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:10.347 ************************************ 00:13:10.347 END TEST filesystem_in_capsule_btrfs 00:13:10.348 ************************************ 00:13:10.348 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:10.348 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:10.348 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.348 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.606 ************************************ 00:13:10.606 START TEST filesystem_in_capsule_xfs 00:13:10.606 ************************************ 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:10.606 05:30:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:10.606 05:30:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:10.606 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:10.606 = sectsz=512 attr=2, projid32bit=1 00:13:10.606 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:10.606 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:10.606 data = bsize=4096 blocks=130560, imaxpct=25 00:13:10.606 = sunit=0 swidth=0 blks 00:13:10.606 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:10.606 log =internal log bsize=4096 blocks=16384, version=2 00:13:10.606 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:10.606 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:10.606 Discarding blocks...Done. 
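Each of the three sub-tests above (ext4, btrfs, xfs) runs the same `nvmf_filesystem_create` body: make the filesystem on `/dev/nvme0n1p1`, mount it, then do a small I/O smoke test (create a file, sync, delete it, sync) before unmounting and checking the process is still alive. A minimal sketch of that smoke-test step follows; to keep it runnable without root or a real NVMe-oF device, the mounted filesystem is replaced by a temp directory, which is an assumption for illustration only.

```shell
#!/usr/bin/env bash
# Sketch of the per-filesystem smoke test seen in this log
# (target/filesystem.sh steps: touch /mnt/device/aaa; sync; rm; sync).
# ASSUMPTION: a mktemp directory stands in for the real mkfs+mount of
# /dev/nvme0n1p1 so this runs unprivileged.
set -euo pipefail

fs_smoke_test() {
    local mnt=$1
    touch "$mnt/aaa"      # create a file on the (freshly made) filesystem
    sync                  # flush the creation to stable storage
    rm "$mnt/aaa"         # remove it again
    sync                  # flush the deletion
    [ ! -e "$mnt/aaa" ]   # the file must be gone afterwards
}

mnt=$(mktemp -d)
fs_smoke_test "$mnt" && echo "smoke test passed"
rmdir "$mnt"
```

In the real test the same body is followed by `umount /mnt/device` and a `kill -0` on the nvmf target pid (3261675 in this run) to confirm the target survived the filesystem traffic.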
00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3261675 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o 
NAME 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:10.606 00:13:10.606 real 0m0.217s 00:13:10.606 user 0m0.032s 00:13:10.606 sys 0m0.076s 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.606 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:10.606 ************************************ 00:13:10.606 END TEST filesystem_in_capsule_xfs 00:13:10.606 ************************************ 00:13:10.865 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:10.865 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:10.865 05:30:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.800 05:30:08 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3261675 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3261675 ']' 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3261675 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.800 05:30:08 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3261675 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3261675' 00:13:11.800 killing process with pid 3261675 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3261675 00:13:11.800 05:30:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3261675 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:15.084 00:13:15.084 real 0m11.300s 00:13:15.084 user 0m41.940s 00:13:15.084 sys 0m1.534s 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 ************************************ 00:13:15.084 END TEST nvmf_filesystem_in_capsule 00:13:15.084 ************************************ 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:15.084 05:30:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:15.084 rmmod nvme_rdma 00:13:15.084 rmmod nvme_fabrics 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:15.084 00:13:15.084 real 0m31.328s 00:13:15.084 user 1m25.584s 00:13:15.084 sys 0m9.664s 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 ************************************ 00:13:15.084 END TEST nvmf_filesystem 00:13:15.084 ************************************ 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 ************************************ 00:13:15.084 START TEST nvmf_target_discovery 00:13:15.084 ************************************ 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:15.084 * Looking for test storage... 00:13:15.084 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:15.084 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.343 05:30:11 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.343 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.344 --rc genhtml_branch_coverage=1 00:13:15.344 --rc genhtml_function_coverage=1 
00:13:15.344 --rc genhtml_legend=1 00:13:15.344 --rc geninfo_all_blocks=1 00:13:15.344 --rc geninfo_unexecuted_blocks=1 00:13:15.344 00:13:15.344 ' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.344 --rc genhtml_branch_coverage=1 00:13:15.344 --rc genhtml_function_coverage=1 00:13:15.344 --rc genhtml_legend=1 00:13:15.344 --rc geninfo_all_blocks=1 00:13:15.344 --rc geninfo_unexecuted_blocks=1 00:13:15.344 00:13:15.344 ' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.344 --rc genhtml_branch_coverage=1 00:13:15.344 --rc genhtml_function_coverage=1 00:13:15.344 --rc genhtml_legend=1 00:13:15.344 --rc geninfo_all_blocks=1 00:13:15.344 --rc geninfo_unexecuted_blocks=1 00:13:15.344 00:13:15.344 ' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.344 --rc genhtml_branch_coverage=1 00:13:15.344 --rc genhtml_function_coverage=1 00:13:15.344 --rc genhtml_legend=1 00:13:15.344 --rc geninfo_all_blocks=1 00:13:15.344 --rc geninfo_unexecuted_blocks=1 00:13:15.344 00:13:15.344 ' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.344 05:30:11 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:15.344 
05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.344 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.344 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.345 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.345 05:30:11 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.345 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.345 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.345 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.345 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.345 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.345 05:30:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:25.323 05:30:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:25.323 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:25.323 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:25.323 05:30:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:25.323 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.323 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:25.324 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ 
rdma == tcp ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # 
mapfile -t rxe_net_devs 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:25.324 05:30:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:25.324 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:25.324 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:25.324 altname enp217s0f0np0 00:13:25.324 altname ens818f0np0 00:13:25.324 inet 192.168.100.8/24 scope global mlx_0_0 00:13:25.324 valid_lft forever preferred_lft forever 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:25.324 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:25.324 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:25.324 altname enp217s0f1np1 00:13:25.324 altname ens818f1np1 00:13:25.324 inet 192.168.100.9/24 scope global mlx_0_1 00:13:25.324 valid_lft forever preferred_lft forever 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:25.324 05:30:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:25.324 192.168.100.9' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:25.324 192.168.100.9' 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:13:25.324 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:25.325 192.168.100.9' 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:13:25.325 05:30:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3268554 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3268554 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3268554 ']' 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 
00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 05:30:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.325 [2024-11-27 05:30:20.682640] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:25.325 [2024-11-27 05:30:20.682770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.325 [2024-11-27 05:30:20.841551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.325 [2024-11-27 05:30:20.948446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.325 [2024-11-27 05:30:20.948490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.325 [2024-11-27 05:30:20.948503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.325 [2024-11-27 05:30:20.948516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.325 [2024-11-27 05:30:20.948527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:25.325 [2024-11-27 05:30:20.952641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.325 [2024-11-27 05:30:20.952664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.325 [2024-11-27 05:30:20.952727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.325 [2024-11-27 05:30:20.952735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 [2024-11-27 05:30:21.567473] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f6a47f01940) succeed. 00:13:25.325 [2024-11-27 05:30:21.577119] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f6a47dbd940) succeed. 
00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 Null1 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.325 05:30:21 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 [2024-11-27 05:30:21.881499] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 Null2 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.325 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 Null3 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 Null4 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 05:30:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:13:25.585 00:13:25.585 Discovery Log Number of Records 6, Generation counter 6 00:13:25.585 =====Discovery Log Entry 0====== 00:13:25.585 trtype: rdma 00:13:25.585 adrfam: ipv4 00:13:25.585 subtype: current discovery subsystem 00:13:25.585 treq: not required 00:13:25.585 portid: 0 00:13:25.585 trsvcid: 4420 00:13:25.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:25.585 traddr: 192.168.100.8 00:13:25.585 eflags: explicit discovery connections, duplicate discovery information 00:13:25.585 rdma_prtype: not specified 00:13:25.585 rdma_qptype: connected 00:13:25.585 rdma_cms: rdma-cm 00:13:25.585 rdma_pkey: 0x0000 00:13:25.585 =====Discovery Log Entry 1====== 00:13:25.585 trtype: rdma 00:13:25.585 adrfam: ipv4 00:13:25.585 subtype: nvme subsystem 00:13:25.585 treq: not required 00:13:25.585 portid: 0 00:13:25.585 trsvcid: 4420 00:13:25.585 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:25.585 traddr: 192.168.100.8 00:13:25.585 eflags: none 00:13:25.585 rdma_prtype: not specified 00:13:25.585 rdma_qptype: 
connected 00:13:25.585 rdma_cms: rdma-cm 00:13:25.585 rdma_pkey: 0x0000 00:13:25.585 =====Discovery Log Entry 2====== 00:13:25.585 trtype: rdma 00:13:25.585 adrfam: ipv4 00:13:25.585 subtype: nvme subsystem 00:13:25.585 treq: not required 00:13:25.585 portid: 0 00:13:25.585 trsvcid: 4420 00:13:25.585 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:25.585 traddr: 192.168.100.8 00:13:25.585 eflags: none 00:13:25.585 rdma_prtype: not specified 00:13:25.585 rdma_qptype: connected 00:13:25.585 rdma_cms: rdma-cm 00:13:25.585 rdma_pkey: 0x0000 00:13:25.586 =====Discovery Log Entry 3====== 00:13:25.586 trtype: rdma 00:13:25.586 adrfam: ipv4 00:13:25.586 subtype: nvme subsystem 00:13:25.586 treq: not required 00:13:25.586 portid: 0 00:13:25.586 trsvcid: 4420 00:13:25.586 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:25.586 traddr: 192.168.100.8 00:13:25.586 eflags: none 00:13:25.586 rdma_prtype: not specified 00:13:25.586 rdma_qptype: connected 00:13:25.586 rdma_cms: rdma-cm 00:13:25.586 rdma_pkey: 0x0000 00:13:25.586 =====Discovery Log Entry 4====== 00:13:25.586 trtype: rdma 00:13:25.586 adrfam: ipv4 00:13:25.586 subtype: nvme subsystem 00:13:25.586 treq: not required 00:13:25.586 portid: 0 00:13:25.586 trsvcid: 4420 00:13:25.586 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:25.586 traddr: 192.168.100.8 00:13:25.586 eflags: none 00:13:25.586 rdma_prtype: not specified 00:13:25.586 rdma_qptype: connected 00:13:25.586 rdma_cms: rdma-cm 00:13:25.586 rdma_pkey: 0x0000 00:13:25.586 =====Discovery Log Entry 5====== 00:13:25.586 trtype: rdma 00:13:25.586 adrfam: ipv4 00:13:25.586 subtype: discovery subsystem referral 00:13:25.586 treq: not required 00:13:25.586 portid: 0 00:13:25.586 trsvcid: 4430 00:13:25.586 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:25.586 traddr: 192.168.100.8 00:13:25.586 eflags: none 00:13:25.586 rdma_prtype: unrecognized 00:13:25.586 rdma_qptype: unrecognized 00:13:25.586 rdma_cms: unrecognized 00:13:25.586 rdma_pkey: 0x0000 00:13:25.586 05:30:22 
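The `nvme discover` output above reports six records: one current discovery subsystem, four NVMe subsystems (cnode1-4, all on trsvcid 4420), and one referral on trsvcid 4430. A saved copy of that output can be summarized with standard text tools; the sample below inlines only the `subtype:` lines from the log for illustration:

```shell
# Subtype lines as they appear in the discovery log above.
log='subtype: current discovery subsystem
subtype: nvme subsystem
subtype: nvme subsystem
subtype: nvme subsystem
subtype: nvme subsystem
subtype: discovery subsystem referral'

# Count records per subtype, most frequent first.
printf '%s\n' "$log" | sort | uniq -c | sort -rn
```

Against the full log this confirms the expected shape of the test: four backing subsystems plus the discovery service itself and the referral added via `nvmf_discovery_add_referral`.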
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:25.586 Perform nvmf subsystem discovery via RPC 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.586 [ 00:13:25.586 { 00:13:25.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:25.586 "subtype": "Discovery", 00:13:25.586 "listen_addresses": [ 00:13:25.586 { 00:13:25.586 "trtype": "RDMA", 00:13:25.586 "adrfam": "IPv4", 00:13:25.586 "traddr": "192.168.100.8", 00:13:25.586 "trsvcid": "4420" 00:13:25.586 } 00:13:25.586 ], 00:13:25.586 "allow_any_host": true, 00:13:25.586 "hosts": [] 00:13:25.586 }, 00:13:25.586 { 00:13:25.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.586 "subtype": "NVMe", 00:13:25.586 "listen_addresses": [ 00:13:25.586 { 00:13:25.586 "trtype": "RDMA", 00:13:25.586 "adrfam": "IPv4", 00:13:25.586 "traddr": "192.168.100.8", 00:13:25.586 "trsvcid": "4420" 00:13:25.586 } 00:13:25.586 ], 00:13:25.586 "allow_any_host": true, 00:13:25.586 "hosts": [], 00:13:25.586 "serial_number": "SPDK00000000000001", 00:13:25.586 "model_number": "SPDK bdev Controller", 00:13:25.586 "max_namespaces": 32, 00:13:25.586 "min_cntlid": 1, 00:13:25.586 "max_cntlid": 65519, 00:13:25.586 "namespaces": [ 00:13:25.586 { 00:13:25.586 "nsid": 1, 00:13:25.586 "bdev_name": "Null1", 00:13:25.586 "name": "Null1", 00:13:25.586 "nguid": "430F0B7BE1124FA4AD9B938BD7759AC2", 00:13:25.586 "uuid": "430f0b7b-e112-4fa4-ad9b-938bd7759ac2" 00:13:25.586 } 00:13:25.586 ] 00:13:25.586 }, 00:13:25.586 { 00:13:25.586 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:25.586 "subtype": "NVMe", 00:13:25.586 "listen_addresses": [ 00:13:25.586 { 
00:13:25.586 "trtype": "RDMA", 00:13:25.586 "adrfam": "IPv4", 00:13:25.586 "traddr": "192.168.100.8", 00:13:25.586 "trsvcid": "4420" 00:13:25.586 } 00:13:25.586 ], 00:13:25.586 "allow_any_host": true, 00:13:25.586 "hosts": [], 00:13:25.586 "serial_number": "SPDK00000000000002", 00:13:25.586 "model_number": "SPDK bdev Controller", 00:13:25.586 "max_namespaces": 32, 00:13:25.586 "min_cntlid": 1, 00:13:25.586 "max_cntlid": 65519, 00:13:25.586 "namespaces": [ 00:13:25.586 { 00:13:25.586 "nsid": 1, 00:13:25.586 "bdev_name": "Null2", 00:13:25.586 "name": "Null2", 00:13:25.586 "nguid": "D55F2D35EDCB4CA7BF570591CB407E7F", 00:13:25.586 "uuid": "d55f2d35-edcb-4ca7-bf57-0591cb407e7f" 00:13:25.586 } 00:13:25.586 ] 00:13:25.586 }, 00:13:25.586 { 00:13:25.586 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:25.586 "subtype": "NVMe", 00:13:25.586 "listen_addresses": [ 00:13:25.586 { 00:13:25.586 "trtype": "RDMA", 00:13:25.586 "adrfam": "IPv4", 00:13:25.586 "traddr": "192.168.100.8", 00:13:25.586 "trsvcid": "4420" 00:13:25.586 } 00:13:25.586 ], 00:13:25.586 "allow_any_host": true, 00:13:25.586 "hosts": [], 00:13:25.586 "serial_number": "SPDK00000000000003", 00:13:25.586 "model_number": "SPDK bdev Controller", 00:13:25.586 "max_namespaces": 32, 00:13:25.586 "min_cntlid": 1, 00:13:25.586 "max_cntlid": 65519, 00:13:25.586 "namespaces": [ 00:13:25.586 { 00:13:25.586 "nsid": 1, 00:13:25.586 "bdev_name": "Null3", 00:13:25.586 "name": "Null3", 00:13:25.586 "nguid": "FC574BD9F5194EDBA29BD2C541E46492", 00:13:25.586 "uuid": "fc574bd9-f519-4edb-a29b-d2c541e46492" 00:13:25.586 } 00:13:25.586 ] 00:13:25.586 }, 00:13:25.586 { 00:13:25.586 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:25.586 "subtype": "NVMe", 00:13:25.586 "listen_addresses": [ 00:13:25.586 { 00:13:25.586 "trtype": "RDMA", 00:13:25.586 "adrfam": "IPv4", 00:13:25.586 "traddr": "192.168.100.8", 00:13:25.586 "trsvcid": "4420" 00:13:25.586 } 00:13:25.586 ], 00:13:25.586 "allow_any_host": true, 00:13:25.586 "hosts": [], 00:13:25.586 
"serial_number": "SPDK00000000000004", 00:13:25.586 "model_number": "SPDK bdev Controller", 00:13:25.586 "max_namespaces": 32, 00:13:25.586 "min_cntlid": 1, 00:13:25.586 "max_cntlid": 65519, 00:13:25.586 "namespaces": [ 00:13:25.586 { 00:13:25.586 "nsid": 1, 00:13:25.586 "bdev_name": "Null4", 00:13:25.586 "name": "Null4", 00:13:25.586 "nguid": "70DB580EAF434B51B8674D8E5A01F54C", 00:13:25.586 "uuid": "70db580e-af43-4b51-b867-4d8e5a01f54c" 00:13:25.586 } 00:13:25.586 ] 00:13:25.586 } 00:13:25.586 ] 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.586 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 
-- # rpc_cmd bdev_get_bdevs 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:25.846 rmmod nvme_rdma 00:13:25.846 rmmod nvme_fabrics 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:25.846 
05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3268554 ']' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3268554 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3268554 ']' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3268554 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3268554 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3268554' 00:13:25.846 killing process with pid 3268554 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3268554 00:13:25.846 05:30:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3268554 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:27.754 00:13:27.754 real 0m12.507s 00:13:27.754 user 0m13.309s 00:13:27.754 sys 0m7.387s 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.754 ************************************ 00:13:27.754 END TEST nvmf_target_discovery 00:13:27.754 ************************************ 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.754 ************************************ 00:13:27.754 START TEST nvmf_referrals 00:13:27.754 ************************************ 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:27.754 * Looking for test storage... 
00:13:27.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:27.754 
05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.754 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:27.754 --rc genhtml_branch_coverage=1 00:13:27.754 --rc genhtml_function_coverage=1 00:13:27.754 --rc genhtml_legend=1 00:13:27.754 --rc geninfo_all_blocks=1 00:13:27.754 --rc geninfo_unexecuted_blocks=1 00:13:27.754 00:13:27.754 ' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.754 --rc genhtml_branch_coverage=1 00:13:27.754 --rc genhtml_function_coverage=1 00:13:27.754 --rc genhtml_legend=1 00:13:27.754 --rc geninfo_all_blocks=1 00:13:27.754 --rc geninfo_unexecuted_blocks=1 00:13:27.754 00:13:27.754 ' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.754 --rc genhtml_branch_coverage=1 00:13:27.754 --rc genhtml_function_coverage=1 00:13:27.754 --rc genhtml_legend=1 00:13:27.754 --rc geninfo_all_blocks=1 00:13:27.754 --rc geninfo_unexecuted_blocks=1 00:13:27.754 00:13:27.754 ' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.754 --rc genhtml_branch_coverage=1 00:13:27.754 --rc genhtml_function_coverage=1 00:13:27.754 --rc genhtml_legend=1 00:13:27.754 --rc geninfo_all_blocks=1 00:13:27.754 --rc geninfo_unexecuted_blocks=1 00:13:27.754 00:13:27.754 ' 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.754 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:27.755 05:30:24 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.755 05:30:24 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.755 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.755 
05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.755 05:30:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.877 05:30:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.877 05:30:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:35.877 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:35.877 05:30:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:35.877 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.877 05:30:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:35.877 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:35.877 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # 
load_ib_rdma_modules 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:35.877 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:35.878 05:30:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:35.878 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:35.878 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:35.878 altname enp217s0f0np0 00:13:35.878 altname ens818f0np0 00:13:35.878 inet 192.168.100.8/24 scope global mlx_0_0 00:13:35.878 valid_lft forever preferred_lft forever 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:35.878 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:36.137 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:36.137 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:36.137 altname enp217s0f1np1 00:13:36.137 altname ens818f1np1 00:13:36.137 
inet 192.168.100.9/24 scope global mlx_0_1 00:13:36.137 valid_lft forever preferred_lft forever 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:36.137 
05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:36.137 
05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:36.137 192.168.100.9' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:36.137 192.168.100.9' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:36.137 192.168.100.9' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # 
timing_enter start_nvmf_tgt 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3273171 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3273171 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3273171 ']' 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.137 05:30:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.137 [2024-11-27 05:30:32.667055] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:36.137 [2024-11-27 05:30:32.667150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.395 [2024-11-27 05:30:32.821067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.395 [2024-11-27 05:30:32.920885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.395 [2024-11-27 05:30:32.920934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.395 [2024-11-27 05:30:32.920946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.395 [2024-11-27 05:30:32.920975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.395 [2024-11-27 05:30:32.920985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:36.395 [2024-11-27 05:30:32.923406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.395 [2024-11-27 05:30:32.923428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.395 [2024-11-27 05:30:32.923496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.396 [2024-11-27 05:30:32.923503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.963 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.221 [2024-11-27 05:30:33.567200] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f9c15bbd940) succeed. 00:13:37.221 [2024-11-27 05:30:33.577342] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f9c15b79940) succeed. 
00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.479 [2024-11-27 05:30:33.837683] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:13:37.479 05:30:33 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.479 05:30:33 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.479 05:30:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.479 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 
00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:37.738 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.738 05:30:34 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 
192.168.100.8 -s 8009 -o json 00:13:37.996 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:38.254 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.255 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.512 05:30:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.512 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.513 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.513 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.513 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.771 05:30:35 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:38.771 rmmod nvme_rdma 00:13:38.771 rmmod nvme_fabrics 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3273171 ']' 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3273171 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3273171 ']' 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3273171 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3273171 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.771 05:30:35 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3273171' 00:13:38.771 killing process with pid 3273171 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3273171 00:13:38.771 05:30:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3273171 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:40.671 00:13:40.671 real 0m12.820s 00:13:40.671 user 0m17.573s 00:13:40.671 sys 0m7.325s 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.671 ************************************ 00:13:40.671 END TEST nvmf_referrals 00:13:40.671 ************************************ 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.671 ************************************ 00:13:40.671 START TEST nvmf_connect_disconnect 00:13:40.671 ************************************ 00:13:40.671 05:30:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:40.671 * Looking for test storage... 00:13:40.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:40.671 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:40.671 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:40.671 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.672 05:30:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:40.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.672 --rc genhtml_branch_coverage=1 00:13:40.672 --rc genhtml_function_coverage=1 00:13:40.672 --rc genhtml_legend=1 00:13:40.672 --rc geninfo_all_blocks=1 00:13:40.672 --rc geninfo_unexecuted_blocks=1 00:13:40.672 00:13:40.672 ' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:40.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.672 --rc genhtml_branch_coverage=1 00:13:40.672 --rc genhtml_function_coverage=1 00:13:40.672 --rc genhtml_legend=1 00:13:40.672 --rc geninfo_all_blocks=1 00:13:40.672 --rc geninfo_unexecuted_blocks=1 00:13:40.672 00:13:40.672 ' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:40.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.672 --rc genhtml_branch_coverage=1 00:13:40.672 --rc genhtml_function_coverage=1 00:13:40.672 --rc genhtml_legend=1 00:13:40.672 --rc geninfo_all_blocks=1 00:13:40.672 --rc geninfo_unexecuted_blocks=1 00:13:40.672 00:13:40.672 ' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:40.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.672 --rc genhtml_branch_coverage=1 00:13:40.672 --rc genhtml_function_coverage=1 00:13:40.672 --rc genhtml_legend=1 00:13:40.672 --rc geninfo_all_blocks=1 00:13:40.672 
--rc geninfo_unexecuted_blocks=1 00:13:40.672 00:13:40.672 ' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:40.672 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.672 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:40.673 05:30:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.784 05:30:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:48.784 05:30:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:48.784 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:48.784 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:48.785 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:48.785 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:48.785 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:48.785 05:30:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # 
ip=192.168.100.8 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:48.785 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.785 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:48.785 altname enp217s0f0np0 00:13:48.785 altname ens818f0np0 00:13:48.785 inet 192.168.100.8/24 scope global mlx_0_0 00:13:48.785 valid_lft forever preferred_lft forever 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:48.785 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.785 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:48.785 altname enp217s0f1np1 00:13:48.785 altname ens818f1np1 00:13:48.785 inet 192.168.100.9/24 scope global mlx_0_1 00:13:48.785 valid_lft forever preferred_lft forever 
00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@109 -- # continue 2 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.785 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:48.786 05:30:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:48.786 05:30:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:48.786 192.168.100.9' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:48.786 192.168.100.9' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:48.786 192.168.100.9' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3277979 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3277979 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3277979 ']' 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.786 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.786 [2024-11-27 05:30:45.151188] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:48.786 [2024-11-27 05:30:45.151288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.786 [2024-11-27 05:30:45.304991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.044 [2024-11-27 05:30:45.403816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.044 [2024-11-27 05:30:45.403864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.044 [2024-11-27 05:30:45.403876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.044 [2024-11-27 05:30:45.403888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.044 [2024-11-27 05:30:45.403897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:49.044 [2024-11-27 05:30:45.406371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.044 [2024-11-27 05:30:45.406444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.044 [2024-11-27 05:30:45.406505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.044 [2024-11-27 05:30:45.406513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.612 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.612 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:49.612 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.613 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.613 05:30:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.613 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.613 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:49.613 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.613 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.613 [2024-11-27 05:30:46.011701] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:49.613 [2024-11-27 05:30:46.048088] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fb2efb31940) succeed. 
00:13:49.613 [2024-11-27 05:30:46.058181] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fb2ef1bd940) succeed. 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.872 [2024-11-27 05:30:46.296084] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:49.872 05:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:53.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.379 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.559 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.710 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.835 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:49.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:05.680 rmmod nvme_rdma 00:19:05.680 rmmod nvme_fabrics 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3277979 ']' 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3277979 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3277979 ']' 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3277979 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277979 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277979' 00:19:05.680 killing process with pid 3277979 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3277979 00:19:05.680 05:36:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3277979 00:19:07.060 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.061 05:36:03 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:07.061 00:19:07.061 real 5m26.341s 00:19:07.061 user 21m7.330s 00:19:07.061 sys 0m19.464s 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 ************************************ 00:19:07.061 END TEST nvmf_connect_disconnect 00:19:07.061 ************************************ 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 ************************************ 00:19:07.061 START TEST nvmf_multitarget 00:19:07.061 ************************************ 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:19:07.061 * Looking for test storage... 
00:19:07.061 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
scripts/common.sh@345 -- # : 1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.061 --rc genhtml_branch_coverage=1 00:19:07.061 --rc genhtml_function_coverage=1 00:19:07.061 --rc genhtml_legend=1 00:19:07.061 --rc geninfo_all_blocks=1 00:19:07.061 --rc geninfo_unexecuted_blocks=1 00:19:07.061 00:19:07.061 ' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.061 --rc genhtml_branch_coverage=1 00:19:07.061 --rc genhtml_function_coverage=1 00:19:07.061 --rc genhtml_legend=1 00:19:07.061 --rc geninfo_all_blocks=1 00:19:07.061 --rc geninfo_unexecuted_blocks=1 00:19:07.061 00:19:07.061 ' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.061 --rc genhtml_branch_coverage=1 00:19:07.061 --rc genhtml_function_coverage=1 00:19:07.061 --rc genhtml_legend=1 00:19:07.061 --rc geninfo_all_blocks=1 00:19:07.061 --rc geninfo_unexecuted_blocks=1 00:19:07.061 00:19:07.061 ' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.061 --rc genhtml_branch_coverage=1 00:19:07.061 --rc genhtml_function_coverage=1 00:19:07.061 --rc genhtml_legend=1 00:19:07.061 --rc geninfo_all_blocks=1 00:19:07.061 --rc geninfo_unexecuted_blocks=1 00:19:07.061 00:19:07.061 ' 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD 
]] 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.061 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.322 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.322 05:36:03 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.322 05:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:19:15.446 
05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.446 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:15.447 05:36:11 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:15.447 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:15.447 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ 
mlx5_core == unknown ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:15.447 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:15.447 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:15.447 05:36:11 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show 
mlx_0_0 00:19:15.447 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:15.447 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:15.447 altname enp217s0f0np0 00:19:15.447 altname ens818f0np0 00:19:15.447 inet 192.168.100.8/24 scope global mlx_0_0 00:19:15.447 valid_lft forever preferred_lft forever 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:15.447 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:15.447 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:15.448 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:15.448 altname enp217s0f1np1 00:19:15.448 altname ens818f1np1 00:19:15.448 inet 192.168.100.9/24 scope global mlx_0_1 00:19:15.448 valid_lft forever preferred_lft forever 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:15.448 05:36:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:15.448 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:15.707 
192.168.100.9' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:15.707 192.168.100.9' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:15.707 192.168.100.9' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:15.707 05:36:12 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3337852 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3337852 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3337852 ']' 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.707 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:15.707 [2024-11-27 05:36:12.181311] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:15.707 [2024-11-27 05:36:12.181407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.966 [2024-11-27 05:36:12.336994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.966 [2024-11-27 05:36:12.437119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:15.966 [2024-11-27 05:36:12.437166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.966 [2024-11-27 05:36:12.437178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.966 [2024-11-27 05:36:12.437191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.966 [2024-11-27 05:36:12.437201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.966 [2024-11-27 05:36:12.439529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.966 [2024-11-27 05:36:12.439602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.966 [2024-11-27 05:36:12.439669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.966 [2024-11-27 05:36:12.439675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.533 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.533 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:19:16.533 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.533 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.533 05:36:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:16.533 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.533 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:16.533 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:16.533 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:19:16.792 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:16.792 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:19:16.792 "nvmf_tgt_1" 00:19:16.792 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:16.792 "nvmf_tgt_2" 00:19:16.792 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:16.792 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:19:17.051 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:17.051 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:19:17.051 true 00:19:17.051 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:17.311 true 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:17.311 05:36:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:17.311 rmmod nvme_rdma 00:19:17.311 rmmod nvme_fabrics 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3337852 ']' 00:19:17.311 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3337852 00:19:17.312 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3337852 ']' 00:19:17.312 05:36:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3337852 00:19:17.312 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:19:17.312 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.312 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3337852 00:19:17.571 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.571 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.571 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3337852' 00:19:17.571 killing process with pid 3337852 00:19:17.571 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3337852 00:19:17.571 05:36:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3337852 00:19:18.509 05:36:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.509 05:36:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:18.509 00:19:18.509 real 0m11.566s 00:19:18.509 user 0m13.055s 00:19:18.509 sys 0m6.999s 00:19:18.509 05:36:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.509 05:36:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:18.509 ************************************ 00:19:18.509 END TEST nvmf_multitarget 00:19:18.509 ************************************ 00:19:18.509 05:36:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:18.509 05:36:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.509 05:36:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.509 05:36:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.509 ************************************ 00:19:18.509 START TEST nvmf_rpc 00:19:18.509 ************************************ 00:19:18.509 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:18.768 * Looking for test storage... 00:19:18.768 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.768 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:18.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.769 --rc genhtml_branch_coverage=1 00:19:18.769 --rc genhtml_function_coverage=1 00:19:18.769 --rc genhtml_legend=1 00:19:18.769 --rc geninfo_all_blocks=1 00:19:18.769 --rc geninfo_unexecuted_blocks=1 00:19:18.769 00:19:18.769 ' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:18.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.769 --rc genhtml_branch_coverage=1 00:19:18.769 --rc genhtml_function_coverage=1 00:19:18.769 --rc genhtml_legend=1 00:19:18.769 --rc geninfo_all_blocks=1 00:19:18.769 --rc geninfo_unexecuted_blocks=1 00:19:18.769 00:19:18.769 ' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:18.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.769 --rc genhtml_branch_coverage=1 00:19:18.769 --rc genhtml_function_coverage=1 00:19:18.769 --rc genhtml_legend=1 00:19:18.769 --rc geninfo_all_blocks=1 00:19:18.769 --rc geninfo_unexecuted_blocks=1 00:19:18.769 00:19:18.769 ' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:18.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.769 --rc genhtml_branch_coverage=1 00:19:18.769 --rc genhtml_function_coverage=1 00:19:18.769 --rc genhtml_legend=1 00:19:18.769 --rc geninfo_all_blocks=1 
00:19:18.769 --rc geninfo_unexecuted_blocks=1 00:19:18.769 00:19:18.769 ' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.769 05:36:15 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.769 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:19:18.769 05:36:15 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.769 05:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.895 05:36:23 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.895 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:27.155 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:27.155 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:27.156 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 
-- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:27.156 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:27.156 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip 
addr show mlx_0_0 00:19:27.156 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:27.156 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:27.156 altname enp217s0f0np0 00:19:27.156 altname ens818f0np0 00:19:27.156 inet 192.168.100.8/24 scope global mlx_0_0 00:19:27.156 valid_lft forever preferred_lft forever 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:27.156 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:27.156 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:27.156 altname enp217s0f1np1 00:19:27.156 altname ens818f1np1 00:19:27.156 inet 192.168.100.9/24 scope global mlx_0_1 00:19:27.156 valid_lft forever preferred_lft forever 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\1 ]] 00:19:27.156 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:27.157 192.168.100.9' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:27.157 192.168.100.9' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:27.157 192.168.100.9' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3342555 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:27.157 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3342555 00:19:27.416 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3342555 ']' 00:19:27.416 
05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.416 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.416 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.416 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.416 05:36:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.416 [2024-11-27 05:36:23.828218] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:27.416 [2024-11-27 05:36:23.828313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.416 [2024-11-27 05:36:23.981540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:27.676 [2024-11-27 05:36:24.079933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.676 [2024-11-27 05:36:24.079980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.676 [2024-11-27 05:36:24.079992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.676 [2024-11-27 05:36:24.080004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.676 [2024-11-27 05:36:24.080013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.676 [2024-11-27 05:36:24.082574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.676 [2024-11-27 05:36:24.082655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.676 [2024-11-27 05:36:24.082712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.676 [2024-11-27 05:36:24.082720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.244 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:19:28.244 "tick_rate": 2500000000, 00:19:28.244 "poll_groups": [ 00:19:28.244 { 00:19:28.244 "name": "nvmf_tgt_poll_group_000", 00:19:28.244 "admin_qpairs": 0, 00:19:28.244 "io_qpairs": 0, 00:19:28.244 "current_admin_qpairs": 0, 00:19:28.244 "current_io_qpairs": 0, 00:19:28.244 "pending_bdev_io": 0, 00:19:28.244 "completed_nvme_io": 0, 
00:19:28.244 "transports": [] 00:19:28.244 }, 00:19:28.244 { 00:19:28.244 "name": "nvmf_tgt_poll_group_001", 00:19:28.244 "admin_qpairs": 0, 00:19:28.244 "io_qpairs": 0, 00:19:28.244 "current_admin_qpairs": 0, 00:19:28.245 "current_io_qpairs": 0, 00:19:28.245 "pending_bdev_io": 0, 00:19:28.245 "completed_nvme_io": 0, 00:19:28.245 "transports": [] 00:19:28.245 }, 00:19:28.245 { 00:19:28.245 "name": "nvmf_tgt_poll_group_002", 00:19:28.245 "admin_qpairs": 0, 00:19:28.245 "io_qpairs": 0, 00:19:28.245 "current_admin_qpairs": 0, 00:19:28.245 "current_io_qpairs": 0, 00:19:28.245 "pending_bdev_io": 0, 00:19:28.245 "completed_nvme_io": 0, 00:19:28.245 "transports": [] 00:19:28.245 }, 00:19:28.245 { 00:19:28.245 "name": "nvmf_tgt_poll_group_003", 00:19:28.245 "admin_qpairs": 0, 00:19:28.245 "io_qpairs": 0, 00:19:28.245 "current_admin_qpairs": 0, 00:19:28.245 "current_io_qpairs": 0, 00:19:28.245 "pending_bdev_io": 0, 00:19:28.245 "completed_nvme_io": 0, 00:19:28.245 "transports": [] 00:19:28.245 } 00:19:28.245 ] 00:19:28.245 }' 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.245 05:36:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.504 [2024-11-27 05:36:24.835338] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f03f6db3940) succeed. 00:19:28.504 [2024-11-27 05:36:24.845115] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f03f6d6f940) succeed. 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:19:28.764 "tick_rate": 2500000000, 00:19:28.764 "poll_groups": [ 00:19:28.764 { 00:19:28.764 "name": "nvmf_tgt_poll_group_000", 00:19:28.764 "admin_qpairs": 0, 00:19:28.764 "io_qpairs": 0, 00:19:28.764 "current_admin_qpairs": 0, 00:19:28.764 "current_io_qpairs": 0, 00:19:28.764 "pending_bdev_io": 0, 00:19:28.764 "completed_nvme_io": 0, 00:19:28.764 "transports": [ 00:19:28.764 { 00:19:28.764 "trtype": "RDMA", 00:19:28.764 "pending_data_buffer": 0, 00:19:28.764 "devices": [ 00:19:28.764 { 00:19:28.764 "name": "mlx5_0", 00:19:28.764 "polls": 31432, 00:19:28.764 "idle_polls": 31432, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 0, 00:19:28.764 "send_doorbell_updates": 0, 
00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 }, 00:19:28.764 { 00:19:28.764 "name": "mlx5_1", 00:19:28.764 "polls": 31432, 00:19:28.764 "idle_polls": 31432, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 0, 00:19:28.764 "send_doorbell_updates": 0, 00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 }, 00:19:28.764 { 00:19:28.764 "name": "nvmf_tgt_poll_group_001", 00:19:28.764 "admin_qpairs": 0, 00:19:28.764 "io_qpairs": 0, 00:19:28.764 "current_admin_qpairs": 0, 00:19:28.764 "current_io_qpairs": 0, 00:19:28.764 "pending_bdev_io": 0, 00:19:28.764 "completed_nvme_io": 0, 00:19:28.764 "transports": [ 00:19:28.764 { 00:19:28.764 "trtype": "RDMA", 00:19:28.764 "pending_data_buffer": 0, 00:19:28.764 "devices": [ 00:19:28.764 { 00:19:28.764 "name": "mlx5_0", 00:19:28.764 "polls": 20385, 00:19:28.764 "idle_polls": 20385, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 0, 00:19:28.764 "send_doorbell_updates": 0, 00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 }, 00:19:28.764 { 00:19:28.764 "name": "mlx5_1", 00:19:28.764 "polls": 20385, 00:19:28.764 "idle_polls": 20385, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 
0, 00:19:28.764 "send_doorbell_updates": 0, 00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 }, 00:19:28.764 { 00:19:28.764 "name": "nvmf_tgt_poll_group_002", 00:19:28.764 "admin_qpairs": 0, 00:19:28.764 "io_qpairs": 0, 00:19:28.764 "current_admin_qpairs": 0, 00:19:28.764 "current_io_qpairs": 0, 00:19:28.764 "pending_bdev_io": 0, 00:19:28.764 "completed_nvme_io": 0, 00:19:28.764 "transports": [ 00:19:28.764 { 00:19:28.764 "trtype": "RDMA", 00:19:28.764 "pending_data_buffer": 0, 00:19:28.764 "devices": [ 00:19:28.764 { 00:19:28.764 "name": "mlx5_0", 00:19:28.764 "polls": 10492, 00:19:28.764 "idle_polls": 10492, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 0, 00:19:28.764 "send_doorbell_updates": 0, 00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 }, 00:19:28.764 { 00:19:28.764 "name": "mlx5_1", 00:19:28.764 "polls": 10492, 00:19:28.764 "idle_polls": 10492, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 0, 00:19:28.764 "send_doorbell_updates": 0, 00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 }, 00:19:28.764 { 00:19:28.764 "name": "nvmf_tgt_poll_group_003", 00:19:28.764 "admin_qpairs": 0, 00:19:28.764 "io_qpairs": 0, 00:19:28.764 "current_admin_qpairs": 0, 00:19:28.764 "current_io_qpairs": 0, 00:19:28.764 "pending_bdev_io": 0, 00:19:28.764 "completed_nvme_io": 0, 
00:19:28.764 "transports": [ 00:19:28.764 { 00:19:28.764 "trtype": "RDMA", 00:19:28.764 "pending_data_buffer": 0, 00:19:28.764 "devices": [ 00:19:28.764 { 00:19:28.764 "name": "mlx5_0", 00:19:28.764 "polls": 813, 00:19:28.764 "idle_polls": 813, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 0, 00:19:28.764 "send_doorbell_updates": 0, 00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 }, 00:19:28.764 { 00:19:28.764 "name": "mlx5_1", 00:19:28.764 "polls": 813, 00:19:28.764 "idle_polls": 813, 00:19:28.764 "completions": 0, 00:19:28.764 "requests": 0, 00:19:28.764 "request_latency": 0, 00:19:28.764 "pending_free_request": 0, 00:19:28.764 "pending_rdma_read": 0, 00:19:28.764 "pending_rdma_write": 0, 00:19:28.764 "pending_rdma_send": 0, 00:19:28.764 "total_send_wrs": 0, 00:19:28.764 "send_doorbell_updates": 0, 00:19:28.764 "total_recv_wrs": 4096, 00:19:28.764 "recv_doorbell_updates": 1 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 } 00:19:28.764 ] 00:19:28.764 }' 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:19:28.764 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:19:28.764 05:36:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:19:28.765 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:29.024 05:36:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:29.024 Malloc1 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:19:29.024 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:29.025 [2024-11-27 05:36:25.488410] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.025 05:36:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:29.025 [2024-11-27 05:36:25.534803] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:19:29.025 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:29.025 could not add new controller: failed to write to nvme-fabrics device 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.025 05:36:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:30.403 05:36:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:19:30.403 05:36:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:30.403 05:36:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.403 05:36:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:30.403 05:36:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:32.308 05:36:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:32.308 05:36:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:32.308 05:36:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:32.308 05:36:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:32.308 05:36:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:32.308 05:36:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1212 -- # return 0 00:19:32.308 05:36:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:33.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@652 -- # local es=0 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:33.243 [2024-11-27 05:36:29.606306] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:19:33.243 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:33.243 could not add new controller: failed to write to nvme-fabrics device 00:19:33.243 
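The repeated `could not add new controller` failures in the trace come from the subsystem's host allowlist: with allow-any-host disabled (`nvmf_subsystem_allow_any_host -d`), a connect attempt is accepted only if the host NQN was previously registered with `nvmf_subsystem_add_host`. A minimal Python model of that access check follows; the `Subsystem` class and its method names are illustrative only, not SPDK's actual API (SPDK implements this in C inside the target, in `nvmf_qpair_access_allowed`).

```python
# Illustrative model of the host-allowlist check exercised by the log.
# All names here are made up for the sketch; only the behavior mirrors
# what the trace shows.

class Subsystem:
    def __init__(self, nqn):
        self.nqn = nqn
        self.allow_any_host = False   # state after nvmf_subsystem_allow_any_host -d
        self.hosts = set()            # NQNs added via nvmf_subsystem_add_host

    def add_host(self, hostnqn):
        self.hosts.add(hostnqn)

    def access_allowed(self, hostnqn):
        # Reject unknown hosts unless allow_any_host is enabled,
        # matching the ERROR/success pattern in the trace.
        return self.allow_any_host or hostnqn in self.hosts

subsys = Subsystem("nqn.2016-06.io.spdk:cnode1")
host = "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e"

print(subsys.access_allowed(host))  # False: first nvme connect is rejected
subsys.add_host(host)               # rpc.sh@61: nvmf_subsystem_add_host
print(subsys.access_allowed(host))  # True: second nvme connect succeeds
```

Enabling `allow_any_host` (as `rpc.sh@72` does with `nvmf_subsystem_allow_any_host -e`) would make the check pass even for an unregistered host, which is why the later connect in the trace succeeds without re-adding the host NQN.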
05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.243 05:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:34.180 05:36:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:34.180 05:36:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:34.180 05:36:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:34.180 05:36:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:34.180 05:36:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:36.086 05:36:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:36.087 05:36:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:19:36.087 05:36:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:36.087 05:36:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:36.345 05:36:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:36.345 05:36:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:36.345 05:36:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:37.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:37.282 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:37.283 [2024-11-27 05:36:33.676791] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:37.283 
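The `waitforserial` helper that appears throughout the trace (`autotest_common.sh@1202-1212`) polls `lsblk -l -o NAME,SERIAL` until a block device carrying the subsystem serial (`SPDKISFASTANDAWESOME`) shows up, retrying up to 15 times with a 2-second sleep between polls. A self-contained Python sketch of that loop, with the `lsblk` call stubbed out so it runs without real hardware (function names and the stub are assumptions for illustration):

```python
import time

def waitforserial(serial, list_devices, tries=15, delay=0):
    """Poll until a device with the given serial appears.

    list_devices stands in for `lsblk -l -o NAME,SERIAL`; in the real
    helper the loop sleeps 2 s between polls (delay=0 here for the sketch).
    """
    for _ in range(tries):
        # grep -c SPDKISFASTANDAWESOME equivalent
        matches = sum(1 for _name, s in list_devices() if s == serial)
        if matches >= 1:  # (( nvme_devices == nvme_device_counter ))
            return True
        time.sleep(delay)
    return False

# Simulate the device appearing on the second poll, as in the log,
# where the first lsblk after `sleep 2` already finds the namespace.
state = {"polls": 0}
def fake_lsblk():
    state["polls"] += 1
    return [("nvme0n1", "SPDKISFASTANDAWESOME")] if state["polls"] >= 2 else []

print(waitforserial("SPDKISFASTANDAWESOME", fake_lsblk))  # True
```

The companion `waitforserial_disconnect` in the trace inverts the condition: it polls until `grep -q -w` on the serial stops matching, confirming the namespace is gone after `nvme disconnect`.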
05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.283 05:36:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:38.220 05:36:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:38.220 05:36:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:38.220 05:36:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:38.220 05:36:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:38.220 05:36:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:40.124 05:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:40.125 05:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:40.125 05:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:40.384 05:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:40.384 05:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:40.384 05:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:40.384 05:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:41.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.320 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 
$loops) 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.321 [2024-11-27 05:36:37.710986] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.321 05:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:42.378 05:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:42.378 05:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:42.378 05:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:42.378 05:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:42.378 05:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:44.315 05:36:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:44.315 05:36:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:44.315 05:36:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:44.315 05:36:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:44.315 05:36:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:44.315 05:36:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:44.315 05:36:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:45.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:45.252 05:36:41 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.252 [2024-11-27 05:36:41.764743] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.252 05:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme 
connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:46.187 05:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:46.188 05:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:46.188 05:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:46.188 05:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:46.188 05:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:48.719 05:36:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:48.719 05:36:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:48.719 05:36:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:48.719 05:36:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:48.719 05:36:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:48.719 05:36:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:48.719 05:36:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:49.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:49.287 05:36:45 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:49.287 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:49.288 [2024-11-27 05:36:45.801453] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.288 05:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 
192.168.100.8 -s 4420 00:19:50.222 05:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:50.222 05:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:50.222 05:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.222 05:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:50.222 05:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:52.752 05:36:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:52.752 05:36:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:52.752 05:36:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:52.752 05:36:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:52.752 05:36:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:52.752 05:36:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:52.752 05:36:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:53.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.321 [2024-11-27 05:36:49.837385] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.321 05:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:54.257 05:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:54.257 05:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1202 -- # local i=0 00:19:54.257 05:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:54.258 05:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:54.258 05:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:56.783 05:36:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:56.783 05:36:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:56.783 05:36:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:56.783 05:36:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:56.783 05:36:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:56.783 05:36:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:56.783 05:36:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:57.348 
05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 [2024-11-27 05:36:53.861417] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 [2024-11-27 05:36:53.913604] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 05:36:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.348 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 [2024-11-27 05:36:53.965734] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 [2024-11-27 05:36:54.017970] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 [2024-11-27 05:36:54.070153] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.607 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:57.607 "tick_rate": 2500000000, 00:19:57.607 "poll_groups": [ 00:19:57.607 { 00:19:57.607 "name": "nvmf_tgt_poll_group_000", 00:19:57.607 "admin_qpairs": 2, 00:19:57.607 "io_qpairs": 27, 00:19:57.607 "current_admin_qpairs": 0, 00:19:57.607 "current_io_qpairs": 0, 00:19:57.607 "pending_bdev_io": 0, 00:19:57.607 "completed_nvme_io": 127, 00:19:57.607 "transports": [ 00:19:57.607 { 00:19:57.607 "trtype": "RDMA", 00:19:57.607 "pending_data_buffer": 0, 00:19:57.608 "devices": [ 00:19:57.608 { 00:19:57.608 "name": 
"mlx5_0", 00:19:57.608 "polls": 3440619, 00:19:57.608 "idle_polls": 3440291, 00:19:57.608 "completions": 369, 00:19:57.608 "requests": 184, 00:19:57.608 "request_latency": 47012542, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 311, 00:19:57.608 "send_doorbell_updates": 162, 00:19:57.608 "total_recv_wrs": 4280, 00:19:57.608 "recv_doorbell_updates": 162 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "mlx5_1", 00:19:57.608 "polls": 3440619, 00:19:57.608 "idle_polls": 3440619, 00:19:57.608 "completions": 0, 00:19:57.608 "requests": 0, 00:19:57.608 "request_latency": 0, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 0, 00:19:57.608 "send_doorbell_updates": 0, 00:19:57.608 "total_recv_wrs": 4096, 00:19:57.608 "recv_doorbell_updates": 1 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "nvmf_tgt_poll_group_001", 00:19:57.608 "admin_qpairs": 2, 00:19:57.608 "io_qpairs": 26, 00:19:57.608 "current_admin_qpairs": 0, 00:19:57.608 "current_io_qpairs": 0, 00:19:57.608 "pending_bdev_io": 0, 00:19:57.608 "completed_nvme_io": 125, 00:19:57.608 "transports": [ 00:19:57.608 { 00:19:57.608 "trtype": "RDMA", 00:19:57.608 "pending_data_buffer": 0, 00:19:57.608 "devices": [ 00:19:57.608 { 00:19:57.608 "name": "mlx5_0", 00:19:57.608 "polls": 3345945, 00:19:57.608 "idle_polls": 3345631, 00:19:57.608 "completions": 356, 00:19:57.608 "requests": 178, 00:19:57.608 "request_latency": 45950570, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 302, 00:19:57.608 "send_doorbell_updates": 155, 00:19:57.608 "total_recv_wrs": 4274, 
00:19:57.608 "recv_doorbell_updates": 156 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "mlx5_1", 00:19:57.608 "polls": 3345945, 00:19:57.608 "idle_polls": 3345945, 00:19:57.608 "completions": 0, 00:19:57.608 "requests": 0, 00:19:57.608 "request_latency": 0, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 0, 00:19:57.608 "send_doorbell_updates": 0, 00:19:57.608 "total_recv_wrs": 4096, 00:19:57.608 "recv_doorbell_updates": 1 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "nvmf_tgt_poll_group_002", 00:19:57.608 "admin_qpairs": 1, 00:19:57.608 "io_qpairs": 26, 00:19:57.608 "current_admin_qpairs": 0, 00:19:57.608 "current_io_qpairs": 0, 00:19:57.608 "pending_bdev_io": 0, 00:19:57.608 "completed_nvme_io": 77, 00:19:57.608 "transports": [ 00:19:57.608 { 00:19:57.608 "trtype": "RDMA", 00:19:57.608 "pending_data_buffer": 0, 00:19:57.608 "devices": [ 00:19:57.608 { 00:19:57.608 "name": "mlx5_0", 00:19:57.608 "polls": 3477386, 00:19:57.608 "idle_polls": 3477196, 00:19:57.608 "completions": 211, 00:19:57.608 "requests": 105, 00:19:57.608 "request_latency": 26308576, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 170, 00:19:57.608 "send_doorbell_updates": 93, 00:19:57.608 "total_recv_wrs": 4201, 00:19:57.608 "recv_doorbell_updates": 93 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "mlx5_1", 00:19:57.608 "polls": 3477386, 00:19:57.608 "idle_polls": 3477386, 00:19:57.608 "completions": 0, 00:19:57.608 "requests": 0, 00:19:57.608 "request_latency": 0, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 0, 
00:19:57.608 "send_doorbell_updates": 0, 00:19:57.608 "total_recv_wrs": 4096, 00:19:57.608 "recv_doorbell_updates": 1 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "nvmf_tgt_poll_group_003", 00:19:57.608 "admin_qpairs": 2, 00:19:57.608 "io_qpairs": 26, 00:19:57.608 "current_admin_qpairs": 0, 00:19:57.608 "current_io_qpairs": 0, 00:19:57.608 "pending_bdev_io": 0, 00:19:57.608 "completed_nvme_io": 126, 00:19:57.608 "transports": [ 00:19:57.608 { 00:19:57.608 "trtype": "RDMA", 00:19:57.608 "pending_data_buffer": 0, 00:19:57.608 "devices": [ 00:19:57.608 { 00:19:57.608 "name": "mlx5_0", 00:19:57.608 "polls": 2602482, 00:19:57.608 "idle_polls": 2602170, 00:19:57.608 "completions": 356, 00:19:57.608 "requests": 178, 00:19:57.608 "request_latency": 44687674, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 302, 00:19:57.608 "send_doorbell_updates": 155, 00:19:57.608 "total_recv_wrs": 4274, 00:19:57.608 "recv_doorbell_updates": 156 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "mlx5_1", 00:19:57.608 "polls": 2602482, 00:19:57.608 "idle_polls": 2602482, 00:19:57.608 "completions": 0, 00:19:57.608 "requests": 0, 00:19:57.608 "request_latency": 0, 00:19:57.608 "pending_free_request": 0, 00:19:57.608 "pending_rdma_read": 0, 00:19:57.608 "pending_rdma_write": 0, 00:19:57.608 "pending_rdma_send": 0, 00:19:57.608 "total_send_wrs": 0, 00:19:57.608 "send_doorbell_updates": 0, 00:19:57.608 "total_recv_wrs": 4096, 00:19:57.608 "recv_doorbell_updates": 1 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 }' 00:19:57.608 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:57.608 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 
'filter=.poll_groups[].admin_qpairs' 00:19:57.608 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:57.608 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1292 > 0 )) 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:19:57.867 05:36:54 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 163959362 > 0 )) 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:57.867 rmmod nvme_rdma 00:19:57.867 rmmod nvme_fabrics 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3342555 ']' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3342555 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3342555 ']' 00:19:57.867 05:36:54 
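The `jsum` helper exercised above (target/rpc.sh) is just a jq filter piped into an awk accumulator: `jq '<filter>' | awk '{s+=$1}END{print s}'`. A minimal standalone sketch of that pattern, with jq stubbed out by canned per-group values so it runs without a live target or the SPDK tree (the individual numbers below are illustrative; only the 26 for poll_group_003 and the total of 105 appear in this log):

```shell
# Sketch of target/rpc.sh's jsum: sum the numbers a jq filter emits,
# one per line. jq itself is replaced by printf here so the sketch has
# no dependencies; the per-group io_qpairs values are assumed, not
# taken from this run (except group_003's 26).
sum_lines() {
    awk '{s+=$1} END {print s}'
}

# one value per nvmf_tgt poll group, as '.poll_groups[].io_qpairs' would emit
io_qpairs=$(printf '27\n26\n26\n26\n' | sum_lines)
echo "$io_qpairs"
```

The test script then only asserts that the sum is positive (`(( 105 > 0 ))`), i.e. that at least one I/O queue pair was actually created across the poll groups.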
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3342555 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.867 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3342555 00:19:58.127 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.127 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.127 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3342555' 00:19:58.127 killing process with pid 3342555 00:19:58.127 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3342555 00:19:58.127 05:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3342555 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:00.030 00:20:00.030 real 0m41.264s 00:20:00.030 user 2m9.584s 00:20:00.030 sys 0m8.459s 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:00.030 ************************************ 00:20:00.030 END TEST nvmf_rpc 00:20:00.030 ************************************ 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.030 05:36:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:00.031 ************************************ 00:20:00.031 START TEST nvmf_invalid 00:20:00.031 ************************************ 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:20:00.031 * Looking for test storage... 00:20:00.031 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.031 05:36:56 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:00.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.031 --rc genhtml_branch_coverage=1 00:20:00.031 --rc genhtml_function_coverage=1 00:20:00.031 --rc genhtml_legend=1 00:20:00.031 --rc geninfo_all_blocks=1 00:20:00.031 --rc geninfo_unexecuted_blocks=1 00:20:00.031 00:20:00.031 ' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:00.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.031 --rc genhtml_branch_coverage=1 00:20:00.031 --rc genhtml_function_coverage=1 00:20:00.031 --rc genhtml_legend=1 00:20:00.031 --rc geninfo_all_blocks=1 00:20:00.031 --rc geninfo_unexecuted_blocks=1 00:20:00.031 00:20:00.031 ' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:00.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.031 --rc genhtml_branch_coverage=1 00:20:00.031 --rc genhtml_function_coverage=1 00:20:00.031 --rc genhtml_legend=1 00:20:00.031 --rc geninfo_all_blocks=1 00:20:00.031 --rc geninfo_unexecuted_blocks=1 00:20:00.031 00:20:00.031 ' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:00.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.031 --rc genhtml_branch_coverage=1 00:20:00.031 --rc genhtml_function_coverage=1 00:20:00.031 --rc genhtml_legend=1 
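The `lt 1.15 2` / `cmp_versions` sequence above (scripts/common.sh) splits the installed lcov version and the threshold on separators and compares them component by component to pick the legacy `--rc lcov_*_coverage` option spelling. A condensed sketch of the same decision, using `sort -V` as a stand-in for the script's explicit component loop (an assumption for brevity; GNU sort is required for `-V`):

```shell
# Sketch of the lcov-version gate seen above: pick the pre-2.0 option
# names when the detected lcov is older than 2. sort -V substitutes for
# common.sh's per-component comparison loop.
version_lt() {
    # true when $1 sorts strictly before $2 as a version string
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lcov_version=1.15   # the version this runner's `lcov --version` reported
if version_lt "$lcov_version" 2; then
    # option spelling used by lcov < 2, as exported in the log
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
echo "$lcov_rc_opt"
```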
00:20:00.031 --rc geninfo_all_blocks=1 00:20:00.031 --rc geninfo_unexecuted_blocks=1 00:20:00.031 00:20:00.031 ' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.031 05:36:56 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.031 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.031 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # 
multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.032 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.291 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.291 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.291 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.291 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:00.291 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:00.291 05:36:56 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.291 05:36:56 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.409 05:37:04 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 
00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:08.409 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:08.409 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:08.409 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.409 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 
00:20:08.410 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:08.410 
05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:08.410 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 
-- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:08.670 05:37:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:08.670 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:08.670 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:08.670 altname enp217s0f0np0 00:20:08.670 altname ens818f0np0 00:20:08.670 inet 192.168.100.8/24 scope global mlx_0_0 00:20:08.670 valid_lft forever preferred_lft forever 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o 
-4 addr show mlx_0_1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:08.670 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:08.670 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:08.670 altname enp217s0f1np1 00:20:08.670 altname ens818f1np1 00:20:08.670 inet 192.168.100.9/24 scope global mlx_0_1 00:20:08.670 valid_lft forever preferred_lft forever 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:08.670 192.168.100.9' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:08.670 192.168.100.9' 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:08.670 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:08.670 192.168.100.9' 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # 
'[' -z 192.168.100.8 ']' 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3352227 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3352227 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3352227 ']' 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:08.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.671 05:37:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:08.930 [2024-11-27 05:37:05.263172] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:08.930 [2024-11-27 05:37:05.263276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.930 [2024-11-27 05:37:05.422284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.188 [2024-11-27 05:37:05.525887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.188 [2024-11-27 05:37:05.525937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.188 [2024-11-27 05:37:05.525950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.188 [2024-11-27 05:37:05.525963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.188 [2024-11-27 05:37:05.525973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.188 [2024-11-27 05:37:05.528645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.188 [2024-11-27 05:37:05.528701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.188 [2024-11-27 05:37:05.528713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.188 [2024-11-27 05:37:05.528717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19116 00:20:09.757 [2024-11-27 05:37:06.293565] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:20:09.757 { 00:20:09.757 "nqn": "nqn.2016-06.io.spdk:cnode19116", 00:20:09.757 "tgt_name": "foobar", 00:20:09.757 "method": "nvmf_create_subsystem", 00:20:09.757 "req_id": 1 00:20:09.757 } 00:20:09.757 Got JSON-RPC 
error response 00:20:09.757 response: 00:20:09.757 { 00:20:09.757 "code": -32603, 00:20:09.757 "message": "Unable to find target foobar" 00:20:09.757 }' 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:20:09.757 { 00:20:09.757 "nqn": "nqn.2016-06.io.spdk:cnode19116", 00:20:09.757 "tgt_name": "foobar", 00:20:09.757 "method": "nvmf_create_subsystem", 00:20:09.757 "req_id": 1 00:20:09.757 } 00:20:09.757 Got JSON-RPC error response 00:20:09.757 response: 00:20:09.757 { 00:20:09.757 "code": -32603, 00:20:09.757 "message": "Unable to find target foobar" 00:20:09.757 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:20:09.757 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14254 00:20:10.017 [2024-11-27 05:37:06.510333] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14254: invalid serial number 'SPDKISFASTANDAWESOME' 00:20:10.017 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:20:10.017 { 00:20:10.017 "nqn": "nqn.2016-06.io.spdk:cnode14254", 00:20:10.017 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:10.017 "method": "nvmf_create_subsystem", 00:20:10.017 "req_id": 1 00:20:10.017 } 00:20:10.017 Got JSON-RPC error response 00:20:10.017 response: 00:20:10.017 { 00:20:10.017 "code": -32602, 00:20:10.017 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:10.017 }' 00:20:10.017 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:20:10.017 { 00:20:10.017 "nqn": "nqn.2016-06.io.spdk:cnode14254", 00:20:10.017 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:10.017 "method": "nvmf_create_subsystem", 
00:20:10.017 "req_id": 1 00:20:10.017 } 00:20:10.017 Got JSON-RPC error response 00:20:10.017 response: 00:20:10.017 { 00:20:10.017 "code": -32602, 00:20:10.017 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:10.017 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:10.017 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:20:10.017 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11398 00:20:10.277 [2024-11-27 05:37:06.719070] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11398: invalid model number 'SPDK_Controller' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:20:10.277 { 00:20:10.277 "nqn": "nqn.2016-06.io.spdk:cnode11398", 00:20:10.277 "model_number": "SPDK_Controller\u001f", 00:20:10.277 "method": "nvmf_create_subsystem", 00:20:10.277 "req_id": 1 00:20:10.277 } 00:20:10.277 Got JSON-RPC error response 00:20:10.277 response: 00:20:10.277 { 00:20:10.277 "code": -32602, 00:20:10.277 "message": "Invalid MN SPDK_Controller\u001f" 00:20:10.277 }' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:20:10.277 { 00:20:10.277 "nqn": "nqn.2016-06.io.spdk:cnode11398", 00:20:10.277 "model_number": "SPDK_Controller\u001f", 00:20:10.277 "method": "nvmf_create_subsystem", 00:20:10.277 "req_id": 1 00:20:10.277 } 00:20:10.277 Got JSON-RPC error response 00:20:10.277 response: 00:20:10.277 { 00:20:10.277 "code": -32602, 00:20:10.277 "message": "Invalid MN SPDK_Controller\u001f" 00:20:10.277 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # 
local length=21 ll 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=']' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x50' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:20:10.277 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:20:10.278 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:20:10.278 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.278 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.278 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 83 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:10.537 05:37:06 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ','\''(cR]KA;PFFkSI&Hr('\''' 00:20:10.537 05:37:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ','\''(cR]KA;PFFkSI&Hr('\''' nqn.2016-06.io.spdk:cnode8262 00:20:10.537 [2024-11-27 05:37:07.096315] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8262: invalid serial number ','(cR]KA;PFFkSI&Hr('' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:20:10.797 { 00:20:10.797 "nqn": "nqn.2016-06.io.spdk:cnode8262", 00:20:10.797 "serial_number": ",'\''(cR]KA;PFFkSI&Hr(\u007f'\''", 00:20:10.797 "method": "nvmf_create_subsystem", 00:20:10.797 "req_id": 1 00:20:10.797 } 00:20:10.797 Got JSON-RPC error response 00:20:10.797 response: 00:20:10.797 { 00:20:10.797 "code": -32602, 00:20:10.797 "message": "Invalid SN ,'\''(cR]KA;PFFkSI&Hr(\u007f'\''" 00:20:10.797 }' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:20:10.797 { 00:20:10.797 "nqn": "nqn.2016-06.io.spdk:cnode8262", 00:20:10.797 "serial_number": ",'(cR]KA;PFFkSI&Hr(\u007f'", 00:20:10.797 "method": "nvmf_create_subsystem", 00:20:10.797 "req_id": 1 00:20:10.797 } 00:20:10.797 Got JSON-RPC error response 00:20:10.797 response: 00:20:10.797 { 00:20:10.797 "code": -32602, 00:20:10.797 "message": "Invalid SN ,'(cR]KA;PFFkSI&Hr(\u007f'" 00:20:10.797 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # 
gen_random_s 41 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 
00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x77' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x51' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 121 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:20:10.797 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:20:10.798 05:37:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 
00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
62 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:20:10.798 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.057 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:20:11.058 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'K2t0rw6.9R`Qmiy$QAPX$_~]KgK=T>}rv:3sO&G+' 00:20:11.058 05:37:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'K2t0rw6.9R`Qmiy$QAPX$_~]KgK=T>}rv:3sO&G+' nqn.2016-06.io.spdk:cnode17206 00:20:11.058 [2024-11-27 05:37:07.622247] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17206: invalid model number 'K2t0rw6.9R`Qmiy$QAPX$_~]KgK=T>}rv:3sO&G+' 00:20:11.317 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:20:11.317 { 00:20:11.317 "nqn": "nqn.2016-06.io.spdk:cnode17206", 00:20:11.317 "model_number": "K2t0rw6.9R`Qm\u007fiy$QAPX$_~]KgK=T>}rv:3sO&G+", 00:20:11.317 "method": "nvmf_create_subsystem", 00:20:11.317 "req_id": 1 00:20:11.317 } 00:20:11.317 Got JSON-RPC error response 00:20:11.317 response: 00:20:11.317 { 00:20:11.317 "code": -32602, 00:20:11.317 "message": "Invalid MN K2t0rw6.9R`Qm\u007fiy$QAPX$_~]KgK=T>}rv:3sO&G+" 00:20:11.317 }' 00:20:11.317 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:20:11.317 { 00:20:11.317 "nqn": "nqn.2016-06.io.spdk:cnode17206", 00:20:11.317 "model_number": "K2t0rw6.9R`Qm\u007fiy$QAPX$_~]KgK=T>}rv:3sO&G+", 00:20:11.317 "method": "nvmf_create_subsystem", 00:20:11.317 "req_id": 1 00:20:11.317 } 00:20:11.317 Got JSON-RPC error response 00:20:11.317 response: 00:20:11.317 { 00:20:11.317 "code": -32602, 00:20:11.317 "message": "Invalid MN K2t0rw6.9R`Qm\u007fiy$QAPX$_~]KgK=T>}rv:3sO&G+" 00:20:11.317 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:11.317 05:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:20:11.317 [2024-11-27 05:37:07.868697] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fb90afa6940) succeed. 
00:20:11.317 [2024-11-27 05:37:07.878356] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fb90af62940) succeed. 00:20:11.577 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:20:11.836 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:20:11.836 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:20:11.836 192.168.100.9' 00:20:11.836 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:20:11.836 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:20:11.836 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:20:12.094 [2024-11-27 05:37:08.535571] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:20:12.094 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:20:12.094 { 00:20:12.094 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:12.094 "listen_address": { 00:20:12.094 "trtype": "rdma", 00:20:12.094 "traddr": "192.168.100.8", 00:20:12.094 "trsvcid": "4421" 00:20:12.094 }, 00:20:12.094 "method": "nvmf_subsystem_remove_listener", 00:20:12.094 "req_id": 1 00:20:12.094 } 00:20:12.094 Got JSON-RPC error response 00:20:12.094 response: 00:20:12.094 { 00:20:12.094 "code": -32602, 00:20:12.094 "message": "Invalid parameters" 00:20:12.094 }' 00:20:12.094 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:20:12.094 { 00:20:12.094 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:12.094 "listen_address": { 00:20:12.094 
"trtype": "rdma", 00:20:12.094 "traddr": "192.168.100.8", 00:20:12.094 "trsvcid": "4421" 00:20:12.094 }, 00:20:12.094 "method": "nvmf_subsystem_remove_listener", 00:20:12.094 "req_id": 1 00:20:12.094 } 00:20:12.094 Got JSON-RPC error response 00:20:12.094 response: 00:20:12.094 { 00:20:12.094 "code": -32602, 00:20:12.094 "message": "Invalid parameters" 00:20:12.094 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:20:12.094 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18368 -i 0 00:20:12.353 [2024-11-27 05:37:08.736272] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18368: invalid cntlid range [0-65519] 00:20:12.353 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:20:12.353 { 00:20:12.353 "nqn": "nqn.2016-06.io.spdk:cnode18368", 00:20:12.353 "min_cntlid": 0, 00:20:12.353 "method": "nvmf_create_subsystem", 00:20:12.353 "req_id": 1 00:20:12.353 } 00:20:12.353 Got JSON-RPC error response 00:20:12.353 response: 00:20:12.353 { 00:20:12.353 "code": -32602, 00:20:12.353 "message": "Invalid cntlid range [0-65519]" 00:20:12.353 }' 00:20:12.353 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:20:12.353 { 00:20:12.353 "nqn": "nqn.2016-06.io.spdk:cnode18368", 00:20:12.353 "min_cntlid": 0, 00:20:12.353 "method": "nvmf_create_subsystem", 00:20:12.353 "req_id": 1 00:20:12.353 } 00:20:12.353 Got JSON-RPC error response 00:20:12.353 response: 00:20:12.353 { 00:20:12.353 "code": -32602, 00:20:12.353 "message": "Invalid cntlid range [0-65519]" 00:20:12.353 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:12.353 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode23245 -i 65520 00:20:12.353 [2024-11-27 05:37:08.933005] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23245: invalid cntlid range [65520-65519] 00:20:12.612 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:20:12.612 { 00:20:12.612 "nqn": "nqn.2016-06.io.spdk:cnode23245", 00:20:12.612 "min_cntlid": 65520, 00:20:12.612 "method": "nvmf_create_subsystem", 00:20:12.612 "req_id": 1 00:20:12.612 } 00:20:12.612 Got JSON-RPC error response 00:20:12.612 response: 00:20:12.612 { 00:20:12.612 "code": -32602, 00:20:12.612 "message": "Invalid cntlid range [65520-65519]" 00:20:12.612 }' 00:20:12.612 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:20:12.612 { 00:20:12.612 "nqn": "nqn.2016-06.io.spdk:cnode23245", 00:20:12.612 "min_cntlid": 65520, 00:20:12.612 "method": "nvmf_create_subsystem", 00:20:12.612 "req_id": 1 00:20:12.612 } 00:20:12.612 Got JSON-RPC error response 00:20:12.612 response: 00:20:12.612 { 00:20:12.612 "code": -32602, 00:20:12.612 "message": "Invalid cntlid range [65520-65519]" 00:20:12.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:12.612 05:37:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19107 -I 0 00:20:12.612 [2024-11-27 05:37:09.129738] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19107: invalid cntlid range [1-0] 00:20:12.612 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:20:12.612 { 00:20:12.612 "nqn": "nqn.2016-06.io.spdk:cnode19107", 00:20:12.612 "max_cntlid": 0, 00:20:12.612 "method": "nvmf_create_subsystem", 00:20:12.612 "req_id": 1 00:20:12.612 } 00:20:12.612 Got JSON-RPC error response 00:20:12.612 response: 00:20:12.612 { 00:20:12.612 "code": -32602, 
00:20:12.612 "message": "Invalid cntlid range [1-0]" 00:20:12.612 }' 00:20:12.612 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:20:12.612 { 00:20:12.612 "nqn": "nqn.2016-06.io.spdk:cnode19107", 00:20:12.612 "max_cntlid": 0, 00:20:12.612 "method": "nvmf_create_subsystem", 00:20:12.612 "req_id": 1 00:20:12.612 } 00:20:12.612 Got JSON-RPC error response 00:20:12.612 response: 00:20:12.612 { 00:20:12.612 "code": -32602, 00:20:12.612 "message": "Invalid cntlid range [1-0]" 00:20:12.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:12.612 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12570 -I 65520 00:20:12.871 [2024-11-27 05:37:09.342482] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12570: invalid cntlid range [1-65520] 00:20:12.871 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:20:12.871 { 00:20:12.871 "nqn": "nqn.2016-06.io.spdk:cnode12570", 00:20:12.871 "max_cntlid": 65520, 00:20:12.871 "method": "nvmf_create_subsystem", 00:20:12.871 "req_id": 1 00:20:12.871 } 00:20:12.871 Got JSON-RPC error response 00:20:12.871 response: 00:20:12.871 { 00:20:12.871 "code": -32602, 00:20:12.871 "message": "Invalid cntlid range [1-65520]" 00:20:12.871 }' 00:20:12.871 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:20:12.871 { 00:20:12.871 "nqn": "nqn.2016-06.io.spdk:cnode12570", 00:20:12.871 "max_cntlid": 65520, 00:20:12.871 "method": "nvmf_create_subsystem", 00:20:12.871 "req_id": 1 00:20:12.871 } 00:20:12.871 Got JSON-RPC error response 00:20:12.871 response: 00:20:12.871 { 00:20:12.871 "code": -32602, 00:20:12.871 "message": "Invalid cntlid range [1-65520]" 00:20:12.871 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:12.871 05:37:09 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27367 -i 6 -I 5 00:20:13.130 [2024-11-27 05:37:09.555319] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27367: invalid cntlid range [6-5] 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:20:13.130 { 00:20:13.130 "nqn": "nqn.2016-06.io.spdk:cnode27367", 00:20:13.130 "min_cntlid": 6, 00:20:13.130 "max_cntlid": 5, 00:20:13.130 "method": "nvmf_create_subsystem", 00:20:13.130 "req_id": 1 00:20:13.130 } 00:20:13.130 Got JSON-RPC error response 00:20:13.130 response: 00:20:13.130 { 00:20:13.130 "code": -32602, 00:20:13.130 "message": "Invalid cntlid range [6-5]" 00:20:13.130 }' 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:20:13.130 { 00:20:13.130 "nqn": "nqn.2016-06.io.spdk:cnode27367", 00:20:13.130 "min_cntlid": 6, 00:20:13.130 "max_cntlid": 5, 00:20:13.130 "method": "nvmf_create_subsystem", 00:20:13.130 "req_id": 1 00:20:13.130 } 00:20:13.130 Got JSON-RPC error response 00:20:13.130 response: 00:20:13.130 { 00:20:13.130 "code": -32602, 00:20:13.130 "message": "Invalid cntlid range [6-5]" 00:20:13.130 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:20:13.130 { 00:20:13.130 "name": "foobar", 00:20:13.130 "method": "nvmf_delete_target", 00:20:13.130 "req_id": 1 00:20:13.130 } 00:20:13.130 Got JSON-RPC error response 00:20:13.130 response: 00:20:13.130 { 00:20:13.130 "code": -32602, 00:20:13.130 "message": "The specified 
target doesn'\''t exist, cannot delete it." 00:20:13.130 }' 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:20:13.130 { 00:20:13.130 "name": "foobar", 00:20:13.130 "method": "nvmf_delete_target", 00:20:13.130 "req_id": 1 00:20:13.130 } 00:20:13.130 Got JSON-RPC error response 00:20:13.130 response: 00:20:13.130 { 00:20:13.130 "code": -32602, 00:20:13.130 "message": "The specified target doesn't exist, cannot delete it." 00:20:13.130 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:13.130 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:13.130 rmmod nvme_rdma 00:20:13.389 rmmod nvme_fabrics 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:20:13.389 05:37:09 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3352227 ']' 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3352227 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3352227 ']' 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3352227 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3352227 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3352227' 00:20:13.389 killing process with pid 3352227 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3352227 00:20:13.389 05:37:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3352227 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:15.295 00:20:15.295 real 0m15.069s 00:20:15.295 user 0m27.362s 00:20:15.295 sys 0m7.860s 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set 
+x 00:20:15.295 ************************************ 00:20:15.295 END TEST nvmf_invalid 00:20:15.295 ************************************ 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.295 ************************************ 00:20:15.295 START TEST nvmf_connect_stress 00:20:15.295 ************************************ 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:20:15.295 * Looking for test storage... 
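The nvmf_invalid test that just ended above ("END TEST nvmf_invalid") asserts that `nvmf_create_subsystem` rejects an inverted controller-ID range (`-i 6 -I 5`) with JSON-RPC error -32602 and the message `Invalid cntlid range [6-5]`. A minimal stand-alone sketch of that validation rule, derived only from the error text in this log (the function name is mine, and this is not SPDK's actual C implementation):

```shell
# Hypothetical re-sketch of the cntlid-range rule the test exercises: a
# subsystem with min_cntlid > max_cntlid is rejected, and the error string
# matches the "Invalid cntlid range [6-5]" response shown in the log above.
validate_cntlid_range() {
    local min=$1 max=$2
    if [ "$min" -gt "$max" ]; then
        echo "Invalid cntlid range [$min-$max]"
        return 1
    fi
    echo "ok"
}
validate_cntlid_range 6 5 || true   # mirrors the -i 6 -I 5 case from the log
```

The test script itself performs the equivalent check with a bash pattern match against the captured output, which is why the log shows the backslash-escaped literal pattern `*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e*`: escaping every character forces `[[ ... == ... ]]` to match the text literally rather than as a glob.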
00:20:15.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:20:15.295 05:37:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
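The `cmp_versions` trace above (scripts/common.sh@333-368) splits both version strings on `.`, `-`, and `:` and compares components numerically; here `1.15 < 2`, so the installed lcov is treated as pre-2.0 and the legacy `--rc lcov_*` option spelling is selected. A hedged, self-contained sketch of the same less-than comparison (the function name is mine, not scripts/common.sh's):

```shell
# Component-wise numeric version "less than", in the spirit of cmp_versions
# above: split on . - :, compare left to right, treat missing components as 0.
lt_version() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0    # first differing component decides
        (( x > y )) && return 1
    done
    return 1                       # equal versions are not less-than
}
lt_version 1.15 2 && echo "1.15 < 2: use legacy lcov options"
```

Note that a plain string comparison would get this wrong (`"1.15" > "2"` lexically is false, but `"1.9" < "1.15"` lexically is also false), which is why the script compares each component as an integer.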
00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.295 --rc genhtml_branch_coverage=1 00:20:15.295 --rc genhtml_function_coverage=1 00:20:15.295 --rc genhtml_legend=1 00:20:15.295 --rc geninfo_all_blocks=1 00:20:15.295 --rc geninfo_unexecuted_blocks=1 00:20:15.295 00:20:15.295 ' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.295 --rc genhtml_branch_coverage=1 00:20:15.295 --rc genhtml_function_coverage=1 00:20:15.295 --rc genhtml_legend=1 00:20:15.295 --rc geninfo_all_blocks=1 00:20:15.295 --rc geninfo_unexecuted_blocks=1 00:20:15.295 00:20:15.295 ' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.295 --rc genhtml_branch_coverage=1 00:20:15.295 --rc genhtml_function_coverage=1 00:20:15.295 --rc genhtml_legend=1 00:20:15.295 --rc geninfo_all_blocks=1 00:20:15.295 --rc geninfo_unexecuted_blocks=1 00:20:15.295 00:20:15.295 ' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.295 --rc genhtml_branch_coverage=1 00:20:15.295 --rc genhtml_function_coverage=1 00:20:15.295 --rc genhtml_legend=1 00:20:15.295 --rc geninfo_all_blocks=1 00:20:15.295 --rc geninfo_unexecuted_blocks=1 00:20:15.295 00:20:15.295 ' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@7 -- # uname -s 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.295 05:37:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.295 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:15.296 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # 
nvmftestinit 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.296 05:37:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:23.417 05:37:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.417 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:23.418 
Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:23.418 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:23.418 05:37:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:23.418 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:23.418 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:23.418 05:37:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:23.418 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:23.418 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:23.418 altname enp217s0f0np0 00:20:23.418 altname ens818f0np0 00:20:23.418 inet 192.168.100.8/24 scope global mlx_0_0 00:20:23.418 valid_lft forever preferred_lft forever 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 
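The `allocate_nic_ips` flow above resolves each RDMA interface's IPv4 address with the pipeline `ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1` (nvmf/common.sh@117): field 4 of the one-line `ip -o` output is the address in CIDR form, and `cut` strips the prefix length. A self-contained sketch of that parsing step, fed a sample line modeled on the mlx_0_0 entry in this log (the helper name is mine):

```shell
# extract_ip mimics get_ip_address's parsing step: read `ip -o -4 addr show`
# output on stdin and print the bare IPv4 address (field 4, /prefix stripped).
extract_ip() {
    awk '{print $4}' | cut -d/ -f1
}
# Sample one-line ("-o") output modeled on the mlx_0_0 interface above:
echo '6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0' | extract_ip
```

Parsing the `-o` (oneline) form rather than the multi-line `ip addr show` output keeps the awk/cut pipeline to a single fixed field position per interface.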
00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:23.418 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:23.418 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:23.418 altname enp217s0f1np1 00:20:23.418 altname ens818f1np1 00:20:23.418 inet 192.168.100.9/24 scope global mlx_0_1 00:20:23.418 valid_lft forever preferred_lft forever 00:20:23.418 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:23.419 05:37:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:23.419 
05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:23.419 05:37:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:23.678 192.168.100.9' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:23.678 192.168.100.9' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:23.678 05:37:20 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:23.678 192.168.100.9' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3357382 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 
3357382 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3357382 ']' 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.678 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:23.678 [2024-11-27 05:37:20.163008] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:23.678 [2024-11-27 05:37:20.163116] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.937 [2024-11-27 05:37:20.318853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:23.937 [2024-11-27 05:37:20.420979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.937 [2024-11-27 05:37:20.421031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.937 [2024-11-27 05:37:20.421047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.937 [2024-11-27 05:37:20.421060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:23.937 [2024-11-27 05:37:20.421070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.937 [2024-11-27 05:37:20.423431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.937 [2024-11-27 05:37:20.423493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.937 [2024-11-27 05:37:20.423500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.506 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.506 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:20:24.506 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.506 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.506 05:37:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.506 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.506 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:24.506 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.506 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.506 [2024-11-27 05:37:21.042856] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f6ea4b5a940) succeed. 00:20:24.506 [2024-11-27 05:37:21.052012] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f6ea4b13940) succeed. 
00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.776 [2024-11-27 05:37:21.265087] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.776 NULL1 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=3357665 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.776 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:24.777 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:25.036 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:25.295 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.295 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:25.295 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.295 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.295 05:37:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:25.553 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.553 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:25.553 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.553 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.553 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.122 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.122 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:26.122 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.122 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.122 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.381 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:26.381 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:26.381 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.381 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.381 05:37:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.640 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.640 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:26.640 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.640 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.640 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.208 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.208 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:27.208 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.208 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.208 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.467 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:27.467 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.467 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.467 05:37:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.726 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.726 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:27.726 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.726 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.726 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:28.295 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.295 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:28.295 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.295 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.295 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:28.553 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.553 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:28.553 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.553 05:37:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.553 05:37:24 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:28.812 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.812 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:28.812 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.812 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.812 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:29.379 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.379 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:29.379 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.379 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.379 05:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:29.638 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.638 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:29.638 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.638 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.638 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:29.896 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:29.896 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.896 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.896 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:30.463 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.463 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:30.463 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.463 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.463 05:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:30.722 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.722 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:30.722 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.722 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.722 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:30.981 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.981 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:30.982 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.982 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.982 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:31.547 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.547 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:31.547 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:31.547 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.547 05:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:31.806 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.806 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:31.806 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:31.806 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.806 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:32.064 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.064 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:32.064 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:32.064 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.064 05:37:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:32.674 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.674 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:32.674 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:32.674 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.674 05:37:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:32.932 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.932 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:32.932 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:32.932 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.932 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:33.190 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.190 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:33.190 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:33.190 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.190 05:37:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:33.757 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:33.757 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:33.757 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:33.757 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.757 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:34.015 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.015 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:34.015 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:34.015 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.015 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:34.273 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.273 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:34.273 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:34.273 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.273 05:37:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:34.842 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.842 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:34.842 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:20:34.842 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.842 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:35.101 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.101 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:35.101 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:35.101 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.101 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:35.101 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3357665 00:20:35.360 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3357665) - No such process 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3357665 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 
-- # nvmfcleanup 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:35.360 rmmod nvme_rdma 00:20:35.360 rmmod nvme_fabrics 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3357382 ']' 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3357382 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3357382 ']' 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3357382 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.360 05:37:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3357382 00:20:35.619 05:37:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:35.619 05:37:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:35.619 05:37:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3357382' 00:20:35.619 killing process with pid 3357382 00:20:35.619 05:37:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3357382 00:20:35.620 05:37:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3357382 00:20:36.996 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.996 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:36.996 00:20:36.996 real 0m22.000s 00:20:36.996 user 0m44.919s 00:20:36.996 sys 0m10.743s 00:20:36.996 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.996 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:36.996 ************************************ 00:20:36.996 END TEST nvmf_connect_stress 00:20:36.996 ************************************ 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:37.255 ************************************ 00:20:37.255 START TEST nvmf_fused_ordering 
00:20:37.255 ************************************ 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:20:37.255 * Looking for test storage... 00:20:37.255 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@341 -- # ver2_l=1 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:20:37.255 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:37.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.256 --rc genhtml_branch_coverage=1 00:20:37.256 --rc genhtml_function_coverage=1 00:20:37.256 --rc genhtml_legend=1 00:20:37.256 --rc geninfo_all_blocks=1 00:20:37.256 --rc geninfo_unexecuted_blocks=1 00:20:37.256 00:20:37.256 ' 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:37.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.256 --rc genhtml_branch_coverage=1 00:20:37.256 --rc genhtml_function_coverage=1 00:20:37.256 --rc genhtml_legend=1 00:20:37.256 --rc geninfo_all_blocks=1 00:20:37.256 --rc geninfo_unexecuted_blocks=1 00:20:37.256 00:20:37.256 ' 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:37.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.256 --rc genhtml_branch_coverage=1 00:20:37.256 --rc genhtml_function_coverage=1 00:20:37.256 --rc genhtml_legend=1 00:20:37.256 --rc geninfo_all_blocks=1 00:20:37.256 --rc geninfo_unexecuted_blocks=1 00:20:37.256 00:20:37.256 ' 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:37.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.256 --rc genhtml_branch_coverage=1 00:20:37.256 --rc genhtml_function_coverage=1 00:20:37.256 --rc genhtml_legend=1 00:20:37.256 --rc geninfo_all_blocks=1 00:20:37.256 --rc geninfo_unexecuted_blocks=1 
00:20:37.256 00:20:37.256 ' 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.256 
05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:37.256 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.516 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # 
nvmftestinit 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.516 05:37:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.498 05:37:42 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:47.498 
Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:47.498 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:47.498 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.499 05:37:42 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:47.499 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:47.499 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:47.499 05:37:42 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:47.499 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:47.499 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:47.499 altname enp217s0f0np0 00:20:47.499 altname ens818f0np0 00:20:47.499 inet 192.168.100.8/24 scope global mlx_0_0 00:20:47.499 valid_lft forever preferred_lft forever 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 
00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:47.499 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:47.499 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:47.499 altname enp217s0f1np1 00:20:47.499 altname ens818f1np1 00:20:47.499 inet 192.168.100.9/24 scope global mlx_0_1 00:20:47.499 valid_lft forever preferred_lft forever 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:47.499 05:37:42 
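[Editor's note, not part of the log: the `get_ip_address` helper traced above (nvmf/common.sh@116-117) extracts an interface's IPv4 address by taking field 4 of `ip -o -4 addr show <ifc>` and stripping the CIDR suffix. A self-contained sketch follows; the sample input line is an assumption modeled on the `mlx_0_0` output in the log.]

```shell
# Sketch of get_ip_address: field 4 of `ip -o -4 addr show` is the
# address in CIDR form; cut -d/ -f1 drops the /24 prefix length.
# `sample` stands in for real `ip -o -4` output (hypothetical here).
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
echo "$sample" | awk '{print $4}' | cut -d/ -f1
```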
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:47.499 
05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:47.499 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:47.500 192.168.100.9' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:47.500 192.168.100.9' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:47.500 05:37:42 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:47.500 192.168.100.9' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3363745 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 
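[Editor's note, not part of the log: lines @484-486 above build `RDMA_IP_LIST` as a newline-separated list and then split out the first and second target IPs with `head` and `tail -n +2 | head -n 1`. A minimal sketch of that selection, using the two addresses from the log:]

```shell
# Sketch of the NVMF_FIRST/SECOND_TARGET_IP selection (nvmf/common.sh@485-486).
# RDMA_IP_LIST holds one IP per line, as produced by get_available_rdma_ips.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```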
3363745 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3363745 ']' 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.500 05:37:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 [2024-11-27 05:37:42.732225] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:47.500 [2024-11-27 05:37:42.732330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.500 [2024-11-27 05:37:42.887442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.500 [2024-11-27 05:37:42.986577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.500 [2024-11-27 05:37:42.986632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.500 [2024-11-27 05:37:42.986646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.500 [2024-11-27 05:37:42.986660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:47.500 [2024-11-27 05:37:42.986670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.500 [2024-11-27 05:37:42.988083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 [2024-11-27 05:37:43.596584] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fec7e9a6940) succeed. 00:20:47.500 [2024-11-27 05:37:43.605753] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fec7e962940) succeed. 
00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 [2024-11-27 05:37:43.701101] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 NULL1 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.500 05:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:47.500 [2024-11-27 05:37:43.784419] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:47.500 [2024-11-27 05:37:43.784482] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3364021 ] 00:20:47.500 Attached to nqn.2016-06.io.spdk:cnode1 00:20:47.500 Namespace ID: 1 size: 1GB 00:20:47.500 fused_ordering(0) 00:20:47.500 fused_ordering(1) 00:20:47.500 fused_ordering(2) 00:20:47.501 fused_ordering(3) 00:20:47.501 fused_ordering(4) 00:20:47.501 fused_ordering(5) 00:20:47.501 fused_ordering(6) 00:20:47.501 fused_ordering(7) 00:20:47.501 fused_ordering(8) 00:20:47.501 fused_ordering(9) 00:20:47.501 fused_ordering(10) 00:20:47.501 fused_ordering(11) 00:20:47.501 fused_ordering(12) 00:20:47.501 fused_ordering(13) 00:20:47.501 fused_ordering(14) 00:20:47.501 fused_ordering(15) 00:20:47.501 fused_ordering(16) 00:20:47.501 fused_ordering(17) 00:20:47.501 fused_ordering(18) 00:20:47.501 fused_ordering(19) 00:20:47.501 fused_ordering(20) 00:20:47.501 fused_ordering(21) 00:20:47.501 fused_ordering(22) 00:20:47.501 fused_ordering(23) 00:20:47.501 fused_ordering(24) 00:20:47.501 fused_ordering(25) 00:20:47.501 fused_ordering(26) 00:20:47.501 fused_ordering(27) 00:20:47.501 fused_ordering(28) 00:20:47.501 fused_ordering(29) 00:20:47.501 fused_ordering(30) 00:20:47.501 fused_ordering(31) 00:20:47.501 fused_ordering(32) 00:20:47.501 fused_ordering(33) 00:20:47.501 fused_ordering(34) 00:20:47.501 fused_ordering(35) 00:20:47.501 fused_ordering(36) 00:20:47.501 fused_ordering(37) 00:20:47.501 fused_ordering(38) 00:20:47.501 fused_ordering(39) 00:20:47.501 fused_ordering(40) 00:20:47.501 fused_ordering(41) 00:20:47.501 fused_ordering(42) 00:20:47.501 fused_ordering(43) 00:20:47.501 fused_ordering(44) 00:20:47.501 fused_ordering(45) 00:20:47.501 fused_ordering(46) 00:20:47.501 fused_ordering(47) 00:20:47.501 fused_ordering(48) 00:20:47.501 fused_ordering(49) 00:20:47.501 
fused_ordering(50) 00:20:47.501 fused_ordering(51) 00:20:47.501 fused_ordering(52) 00:20:47.501 fused_ordering(53) 00:20:47.501 fused_ordering(54) 00:20:47.501 fused_ordering(55) 00:20:47.501 fused_ordering(56) 00:20:47.501 fused_ordering(57) 00:20:47.501 fused_ordering(58) 00:20:47.501 fused_ordering(59) 00:20:47.501 fused_ordering(60) 00:20:47.501 fused_ordering(61) 00:20:47.501 fused_ordering(62) 00:20:47.501 fused_ordering(63) 00:20:47.501 fused_ordering(64) 00:20:47.501 fused_ordering(65) 00:20:47.501 fused_ordering(66) 00:20:47.501 fused_ordering(67) 00:20:47.501 fused_ordering(68) 00:20:47.501 fused_ordering(69) 00:20:47.501 fused_ordering(70) 00:20:47.501 fused_ordering(71) 00:20:47.501 fused_ordering(72) 00:20:47.501 fused_ordering(73) 00:20:47.501 fused_ordering(74) 00:20:47.501 fused_ordering(75) 00:20:47.501 fused_ordering(76) 00:20:47.501 fused_ordering(77) 00:20:47.501 fused_ordering(78) 00:20:47.501 fused_ordering(79) 00:20:47.501 fused_ordering(80) 00:20:47.501 fused_ordering(81) 00:20:47.501 fused_ordering(82) 00:20:47.501 fused_ordering(83) 00:20:47.501 fused_ordering(84) 00:20:47.501 fused_ordering(85) 00:20:47.501 fused_ordering(86) 00:20:47.501 fused_ordering(87) 00:20:47.501 fused_ordering(88) 00:20:47.501 fused_ordering(89) 00:20:47.501 fused_ordering(90) 00:20:47.501 fused_ordering(91) 00:20:47.501 fused_ordering(92) 00:20:47.501 fused_ordering(93) 00:20:47.501 fused_ordering(94) 00:20:47.501 fused_ordering(95) 00:20:47.501 fused_ordering(96) 00:20:47.501 fused_ordering(97) 00:20:47.501 fused_ordering(98) 00:20:47.501 fused_ordering(99) 00:20:47.501 fused_ordering(100) 00:20:47.501 fused_ordering(101) 00:20:47.501 fused_ordering(102) 00:20:47.501 fused_ordering(103) 00:20:47.501 fused_ordering(104) 00:20:47.501 fused_ordering(105) 00:20:47.501 fused_ordering(106) 00:20:47.501 fused_ordering(107) 00:20:47.501 fused_ordering(108) 00:20:47.501 fused_ordering(109) 00:20:47.501 fused_ordering(110) 00:20:47.501 fused_ordering(111) 00:20:47.501 
fused_ordering(112) 00:20:47.501 fused_ordering(113) 00:20:47.501 fused_ordering(114) 00:20:47.501 fused_ordering(115) 00:20:47.501 fused_ordering(116) 00:20:47.501 fused_ordering(117) 00:20:47.501 fused_ordering(118) 00:20:47.501 fused_ordering(119) 00:20:47.501 fused_ordering(120) 00:20:47.501 fused_ordering(121) 00:20:47.501 fused_ordering(122) 00:20:47.501 fused_ordering(123) 00:20:47.501 fused_ordering(124) 00:20:47.501 fused_ordering(125) 00:20:47.501 fused_ordering(126) 00:20:47.501 fused_ordering(127) 00:20:47.501 fused_ordering(128) 00:20:47.501 fused_ordering(129) 00:20:47.501 fused_ordering(130) 00:20:47.501 fused_ordering(131) 00:20:47.501 fused_ordering(132) 00:20:47.501 fused_ordering(133) 00:20:47.501 fused_ordering(134) 00:20:47.501 fused_ordering(135) 00:20:47.501 fused_ordering(136) 00:20:47.501 fused_ordering(137) 00:20:47.501 fused_ordering(138) 00:20:47.501 fused_ordering(139) 00:20:47.501 fused_ordering(140) 00:20:47.501 fused_ordering(141) 00:20:47.501 fused_ordering(142) 00:20:47.501 fused_ordering(143) 00:20:47.501 fused_ordering(144) 00:20:47.501 fused_ordering(145) 00:20:47.501 fused_ordering(146) 00:20:47.501 fused_ordering(147) 00:20:47.501 fused_ordering(148) 00:20:47.501 fused_ordering(149) 00:20:47.501 fused_ordering(150) 00:20:47.501 fused_ordering(151) 00:20:47.501 fused_ordering(152) 00:20:47.501 fused_ordering(153) 00:20:47.501 fused_ordering(154) 00:20:47.501 fused_ordering(155) 00:20:47.501 fused_ordering(156) 00:20:47.501 fused_ordering(157) 00:20:47.501 fused_ordering(158) 00:20:47.501 fused_ordering(159) 00:20:47.501 fused_ordering(160) 00:20:47.501 fused_ordering(161) 00:20:47.501 fused_ordering(162) 00:20:47.501 fused_ordering(163) 00:20:47.501 fused_ordering(164) 00:20:47.501 fused_ordering(165) 00:20:47.501 fused_ordering(166) 00:20:47.501 fused_ordering(167) 00:20:47.501 fused_ordering(168) 00:20:47.501 fused_ordering(169) 00:20:47.501 fused_ordering(170) 00:20:47.501 fused_ordering(171) 00:20:47.501 fused_ordering(172) 
00:20:47.501 fused_ordering(173) 00:20:47.501 fused_ordering(174) 00:20:47.501 fused_ordering(175) 00:20:47.501 fused_ordering(176) 00:20:47.501 fused_ordering(177) 00:20:47.501 fused_ordering(178) 00:20:47.501 fused_ordering(179) 00:20:47.501 fused_ordering(180) 00:20:47.501 fused_ordering(181) 00:20:47.501 fused_ordering(182) 00:20:47.501 fused_ordering(183) 00:20:47.501 fused_ordering(184) 00:20:47.501 fused_ordering(185) 00:20:47.501 fused_ordering(186) 00:20:47.501 fused_ordering(187) 00:20:47.501 fused_ordering(188) 00:20:47.501 fused_ordering(189) 00:20:47.501 fused_ordering(190) 00:20:47.501 fused_ordering(191) 00:20:47.501 fused_ordering(192) 00:20:47.501 fused_ordering(193) 00:20:47.501 fused_ordering(194) 00:20:47.501 fused_ordering(195) 00:20:47.501 fused_ordering(196) 00:20:47.501 fused_ordering(197) 00:20:47.502 fused_ordering(198) 00:20:47.502 fused_ordering(199) 00:20:47.502 fused_ordering(200) 00:20:47.502 fused_ordering(201) 00:20:47.502 fused_ordering(202) 00:20:47.502 fused_ordering(203) 00:20:47.502 fused_ordering(204) 00:20:47.502 fused_ordering(205) 00:20:47.761 fused_ordering(206) 00:20:47.761 fused_ordering(207) 00:20:47.761 fused_ordering(208) 00:20:47.761 fused_ordering(209) 00:20:47.761 fused_ordering(210) 00:20:47.761 fused_ordering(211) 00:20:47.761 fused_ordering(212) 00:20:47.761 fused_ordering(213) 00:20:47.761 fused_ordering(214) 00:20:47.761 fused_ordering(215) 00:20:47.761 fused_ordering(216) 00:20:47.761 fused_ordering(217) 00:20:47.761 fused_ordering(218) 00:20:47.761 fused_ordering(219) 00:20:47.761 fused_ordering(220) 00:20:47.761 fused_ordering(221) 00:20:47.761 fused_ordering(222) 00:20:47.761 fused_ordering(223) 00:20:47.761 fused_ordering(224) 00:20:47.761 fused_ordering(225) 00:20:47.761 fused_ordering(226) 00:20:47.761 fused_ordering(227) 00:20:47.761 fused_ordering(228) 00:20:47.761 fused_ordering(229) 00:20:47.761 fused_ordering(230) 00:20:47.761 fused_ordering(231) 00:20:47.761 fused_ordering(232) 00:20:47.761 
fused_ordering(233) 00:20:47.761 fused_ordering(234) 00:20:47.761 fused_ordering(235) 00:20:47.761 fused_ordering(236) 00:20:47.761 fused_ordering(237) 00:20:47.761 fused_ordering(238) 00:20:47.761 fused_ordering(239) 00:20:47.761 fused_ordering(240) 00:20:47.761 fused_ordering(241) 00:20:47.761 fused_ordering(242) 00:20:47.761 fused_ordering(243) 00:20:47.761 fused_ordering(244) 00:20:47.761 fused_ordering(245) 00:20:47.761 fused_ordering(246) 00:20:47.761 fused_ordering(247) 00:20:47.761 fused_ordering(248) 00:20:47.761 fused_ordering(249) 00:20:47.761 fused_ordering(250) 00:20:47.761 fused_ordering(251) 00:20:47.761 fused_ordering(252) 00:20:47.761 fused_ordering(253) 00:20:47.761 fused_ordering(254) 00:20:47.761 fused_ordering(255) 00:20:47.761 fused_ordering(256) 00:20:47.761 fused_ordering(257) 00:20:47.761 fused_ordering(258) 00:20:47.761 fused_ordering(259) 00:20:47.761 fused_ordering(260) 00:20:47.761 fused_ordering(261) 00:20:47.761 fused_ordering(262) 00:20:47.761 fused_ordering(263) 00:20:47.761 fused_ordering(264) 00:20:47.761 fused_ordering(265) 00:20:47.761 fused_ordering(266) 00:20:47.761 fused_ordering(267) 00:20:47.761 fused_ordering(268) 00:20:47.761 fused_ordering(269) 00:20:47.761 fused_ordering(270) 00:20:47.761 fused_ordering(271) 00:20:47.761 fused_ordering(272) 00:20:47.761 fused_ordering(273) 00:20:47.762 fused_ordering(274) 00:20:47.762 fused_ordering(275) 00:20:47.762 fused_ordering(276) 00:20:47.762 fused_ordering(277) 00:20:47.762 fused_ordering(278) 00:20:47.762 fused_ordering(279) 00:20:47.762 fused_ordering(280) 00:20:47.762 fused_ordering(281) 00:20:47.762 fused_ordering(282) 00:20:47.762 fused_ordering(283) 00:20:47.762 fused_ordering(284) 00:20:47.762 fused_ordering(285) 00:20:47.762 fused_ordering(286) 00:20:47.762 fused_ordering(287) 00:20:47.762 fused_ordering(288) 00:20:47.762 fused_ordering(289) 00:20:47.762 fused_ordering(290) 00:20:47.762 fused_ordering(291) 00:20:47.762 fused_ordering(292) 00:20:47.762 fused_ordering(293) 
00:20:47.762 fused_ordering(294) 00:20:47.762 fused_ordering(295) 00:20:47.762 fused_ordering(296) 00:20:47.762 fused_ordering(297) 00:20:47.762 fused_ordering(298) 00:20:47.762 fused_ordering(299) 00:20:47.762 fused_ordering(300) 00:20:47.762 fused_ordering(301) 00:20:47.762 fused_ordering(302) 00:20:47.762 fused_ordering(303) 00:20:47.762 fused_ordering(304) 00:20:47.762 fused_ordering(305) 00:20:47.762 fused_ordering(306) 00:20:47.762 fused_ordering(307) 00:20:47.762 fused_ordering(308) 00:20:47.762 fused_ordering(309) 00:20:47.762 fused_ordering(310) 00:20:47.762 fused_ordering(311) 00:20:47.762 fused_ordering(312) 00:20:47.762 fused_ordering(313) 00:20:47.762 fused_ordering(314) 00:20:47.762 fused_ordering(315) 00:20:47.762 fused_ordering(316) 00:20:47.762 fused_ordering(317) 00:20:47.762 fused_ordering(318) 00:20:47.762 fused_ordering(319) 00:20:47.762 fused_ordering(320) 00:20:47.762 fused_ordering(321) 00:20:47.762 fused_ordering(322) 00:20:47.762 fused_ordering(323) 00:20:47.762 fused_ordering(324) 00:20:47.762 fused_ordering(325) 00:20:47.762 fused_ordering(326) 00:20:47.762 fused_ordering(327) 00:20:47.762 fused_ordering(328) 00:20:47.762 fused_ordering(329) 00:20:47.762 fused_ordering(330) 00:20:47.762 fused_ordering(331) 00:20:47.762 fused_ordering(332) 00:20:47.762 fused_ordering(333) 00:20:47.762 fused_ordering(334) 00:20:47.762 fused_ordering(335) 00:20:47.762 fused_ordering(336) 00:20:47.762 fused_ordering(337) 00:20:47.762 fused_ordering(338) 00:20:47.762 fused_ordering(339) 00:20:47.762 fused_ordering(340) 00:20:47.762 fused_ordering(341) 00:20:47.762 fused_ordering(342) 00:20:47.762 fused_ordering(343) 00:20:47.762 fused_ordering(344) 00:20:47.762 fused_ordering(345) 00:20:47.762 fused_ordering(346) 00:20:47.762 fused_ordering(347) 00:20:47.762 fused_ordering(348) 00:20:47.762 fused_ordering(349) 00:20:47.762 fused_ordering(350) 00:20:47.762 fused_ordering(351) 00:20:47.762 fused_ordering(352) 00:20:47.762 fused_ordering(353) 00:20:47.762 
fused_ordering(354) 00:20:47.762 [... sequential fused_ordering iterations 354 through 1018 elided; all logged between 00:20:47.762 and 00:20:48.284 ...] fused_ordering(1018) 00:20:48.284 
fused_ordering(1019) 00:20:48.284 fused_ordering(1020) 00:20:48.284 fused_ordering(1021) 00:20:48.284 fused_ordering(1022) 00:20:48.284 fused_ordering(1023) 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:48.284 rmmod nvme_rdma 00:20:48.284 rmmod nvme_fabrics 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3363745 ']' 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3363745 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3363745 ']' 00:20:48.284 05:37:44 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3363745 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3363745 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3363745' 00:20:48.284 killing process with pid 3363745 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3363745 00:20:48.284 05:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3363745 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:49.662 00:20:49.662 real 0m12.422s 00:20:49.662 user 0m6.787s 00:20:49.662 sys 0m7.335s 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:49.662 ************************************ 00:20:49.662 END TEST nvmf_fused_ordering 00:20:49.662 ************************************ 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test 
nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:49.662 ************************************ 00:20:49.662 START TEST nvmf_ns_masking 00:20:49.662 ************************************ 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:20:49.662 * Looking for test storage... 00:20:49.662 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:20:49.662 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.921 05:37:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.921 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:20:49.922 05:37:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:49.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.922 --rc genhtml_branch_coverage=1 00:20:49.922 --rc genhtml_function_coverage=1 00:20:49.922 --rc genhtml_legend=1 00:20:49.922 --rc geninfo_all_blocks=1 00:20:49.922 --rc geninfo_unexecuted_blocks=1 00:20:49.922 00:20:49.922 ' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:49.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.922 --rc genhtml_branch_coverage=1 00:20:49.922 --rc genhtml_function_coverage=1 00:20:49.922 --rc genhtml_legend=1 00:20:49.922 --rc geninfo_all_blocks=1 00:20:49.922 --rc geninfo_unexecuted_blocks=1 00:20:49.922 00:20:49.922 ' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:49.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.922 --rc genhtml_branch_coverage=1 00:20:49.922 --rc genhtml_function_coverage=1 00:20:49.922 --rc genhtml_legend=1 00:20:49.922 --rc geninfo_all_blocks=1 00:20:49.922 --rc geninfo_unexecuted_blocks=1 00:20:49.922 00:20:49.922 ' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- 
# LCOV='lcov 00:20:49.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.922 --rc genhtml_branch_coverage=1 00:20:49.922 --rc genhtml_function_coverage=1 00:20:49.922 --rc genhtml_legend=1 00:20:49.922 --rc geninfo_all_blocks=1 00:20:49.922 --rc geninfo_unexecuted_blocks=1 00:20:49.922 00:20:49.922 ' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.922 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7204a79d-0362-46a3-9926-8748613aec03 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c293a18b-c46c-473e-bf46-46ccb5e29348 00:20:49.922 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5214deb7-d26e-4286-988e-15f9dc2c9df5 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- 
# local -g is_hw=no 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.923 05:37:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 
00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:58.184 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:58.184 05:37:54 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:58.184 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.184 05:37:54 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:58.184 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.184 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:58.184 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:58.185 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.185 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:58.185 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:20:58.185 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:58.185 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:58.185 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:58.185 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # 
load_ib_rdma_modules 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:58.443 05:37:54 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk 
'{print $4}' 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:58.443 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:58.444 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:58.444 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:58.444 altname enp217s0f0np0 00:20:58.444 altname ens818f0np0 00:20:58.444 inet 192.168.100.8/24 scope global mlx_0_0 00:20:58.444 valid_lft forever preferred_lft forever 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:58.444 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:58.444 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:58.444 altname enp217s0f1np1 00:20:58.444 altname 
ens818f1np1 00:20:58.444 inet 192.168.100.9/24 scope global mlx_0_1 00:20:58.444 valid_lft forever preferred_lft forever 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@109 -- # continue 2 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:58.444 192.168.100.9' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:58.444 192.168.100.9' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:58.444 192.168.100.9' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:58.444 05:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:58.444 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:58.444 
05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.444 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.444 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:58.444 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3368470 00:20:58.444 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.444 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3368470 00:20:58.444 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3368470 ']' 00:20:58.445 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.445 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.445 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.445 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.445 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:58.702 [2024-11-27 05:37:55.108856] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:58.702 [2024-11-27 05:37:55.108967] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.702 [2024-11-27 05:37:55.261050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.961 [2024-11-27 05:37:55.360334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.961 [2024-11-27 05:37:55.360386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.961 [2024-11-27 05:37:55.360398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.961 [2024-11-27 05:37:55.360412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.961 [2024-11-27 05:37:55.360422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.961 [2024-11-27 05:37:55.361904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.527 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.527 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:59.527 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.527 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.527 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:59.527 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.527 05:37:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:59.785 [2024-11-27 05:37:56.151313] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7efcd5dbd940) succeed. 00:20:59.785 [2024-11-27 05:37:56.160266] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7efcd5d79940) succeed. 
00:20:59.785 05:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:59.785 05:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:59.785 05:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:00.043 Malloc1 00:21:00.044 05:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:00.301 Malloc2 00:21:00.301 05:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:00.559 05:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:00.559 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:00.817 [2024-11-27 05:37:57.270699] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:00.817 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:21:00.817 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5214deb7-d26e-4286-988e-15f9dc2c9df5 -a 192.168.100.8 -s 4420 -i 4 00:21:01.076 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial 
SPDKISFASTANDAWESOME 00:21:01.076 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:21:01.076 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:01.076 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:21:01.076 05:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
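The `waitforserial` helper traced above (common/autotest_common.sh@1202-1212) is a bounded polling loop: count the block devices whose serial matches, and succeed once the count reaches the expected number. A minimal sketch under the assumption that the structure matches the trace; `list_block_devs` is a canned stand-in for `lsblk -l -o NAME,SERIAL` so the sketch runs anywhere:

```shell
# Stand-in for `lsblk -l -o NAME,SERIAL` on the test host.
list_block_devs() {
  printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
}

waitforserial() {
  local serial=$1
  local nvme_device_counter=${2:-1} nvme_devices=0 i=0
  while (( i++ <= 15 )); do
    # Count devices carrying the expected serial number.
    nvme_devices=$(list_block_devs | grep -c "$serial")
    if (( nvme_devices == nvme_device_counter )); then
      return 0
    fi
    sleep 1   # the real helper sleeps 2s before the first check
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME 1 && echo "serial visible"
```

The bounded retry (16 attempts) keeps a connect that never produces a device from hanging the test forever.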
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:03.606 [ 0]:0x1 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337058703375438f9642764bec7496cf 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337058703375438f9642764bec7496cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:03.606 [ 0]:0x1 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337058703375438f9642764bec7496cf 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337058703375438f9642764bec7496cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:03.606 05:37:59 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:03.606 [ 1]:0x2 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc12f7572ae444f19e66f1eee22d3a51 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc12f7572ae444f19e66f1eee22d3a51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:21:03.606 05:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:03.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:03.864 05:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:04.122 05:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:04.381 05:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:21:04.381 05:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 
5214deb7-d26e-4286-988e-15f9dc2c9df5 -a 192.168.100.8 -s 4420 -i 4 00:21:04.639 05:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:04.639 05:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:21:04.639 05:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.639 05:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:21:04.639 05:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:21:04.639 05:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # 
ctrl_id=nvme0 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.536 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:06.795 05:38:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:06.795 [ 0]:0x2 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc12f7572ae444f19e66f1eee22d3a51 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc12f7572ae444f19e66f1eee22d3a51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:06.795 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:07.053 [ 0]:0x1 
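The `ns_is_visible` checks in this trace (target/ns_masking.sh@43-45) work in two steps: grep the NSID out of `nvme list-ns`, then read the NGUID via `nvme id-ns ... -o json | jq -r .nguid` and require it to be non-zero, since a namespace attached but masked from this host reports an all-zero NGUID. A sketch of the NGUID test, with the values taken from this trace in place of a live `/dev/nvme0`:

```shell
# A masked namespace reports an all-zero NGUID to the host.
zero_nguid=00000000000000000000000000000000

ns_nguid_visible() {
  # $1: NGUID string, as produced by `nvme id-ns ... -o json | jq -r .nguid`
  [[ "$1" != "$zero_nguid" ]]
}

ns_nguid_visible 337058703375438f9642764bec7496cf && echo "ns1 visible"   # NGUID from the trace
ns_nguid_visible "$zero_nguid" || echo "ns masked"
```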
00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337058703375438f9642764bec7496cf 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337058703375438f9642764bec7496cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:07.053 [ 1]:0x2 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc12f7572ae444f19e66f1eee22d3a51 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc12f7572ae444f19e66f1eee22d3a51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:07.053 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:07.311 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:21:07.311 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- 
# local es=0 00:21:07.311 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:07.312 05:38:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:07.312 [ 0]:0x2 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc12f7572ae444f19e66f1eee22d3a51 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc12f7572ae444f19e66f1eee22d3a51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:21:07.312 05:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:07.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:07.569 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:07.828 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:21:07.828 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5214deb7-d26e-4286-988e-15f9dc2c9df5 -a 192.168.100.8 -s 4420 -i 4 00:21:08.085 05:38:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:08.086 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:21:08.086 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:08.086 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:21:08.086 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:21:08.086 05:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:10.614 [ 0]:0x1 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337058703375438f9642764bec7496cf 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337058703375438f9642764bec7496cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.614 [ 1]:0x2 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc12f7572ae444f19e66f1eee22d3a51 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc12f7572ae444f19e66f1eee22d3a51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.614 05:38:06 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:10.614 05:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.614 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:10.614 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.614 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.615 [ 0]:0x2 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc12f7572ae444f19e66f1eee22d3a51 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc12f7572ae444f19e66f1eee22d3a51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:21:10.615 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:10.874 [2024-11-27 05:38:07.253031] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:10.874 request: 00:21:10.874 { 00:21:10.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.874 "nsid": 2, 00:21:10.874 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.874 "method": 
"nvmf_ns_remove_host", 00:21:10.874 "req_id": 1 00:21:10.874 } 00:21:10.874 Got JSON-RPC error response 00:21:10.874 response: 00:21:10.874 { 00:21:10.874 "code": -32602, 00:21:10.874 "message": "Invalid parameters" 00:21:10.874 } 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:10.874 05:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:10.874 [ 0]:0x2 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc12f7572ae444f19e66f1eee22d3a51 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc12f7572ae444f19e66f1eee22d3a51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.874 
05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:21:10.874 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:11.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3370754 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3370754 /var/tmp/host.sock 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3370754 ']' 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:11.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.132 05:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:11.390 [2024-11-27 05:38:07.779698] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
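The trap registered at ns_masking.sh@119 ('killprocess $hostpid; nvmftestfini' on SIGINT/SIGTERM/EXIT) is the standard autotest cleanup pattern: the teardown is installed immediately after the host-side `spdk_tgt` is spawned, so the helper app is killed even if the test aborts early. A reduced sketch with `killprocess` stubbed out (the real helper kills and waits on the pid):

```shell
killprocess() { echo "killprocess: would kill pid $1"; }   # stub of the autotest helper

hostpid=3370754   # pid from the trace

# Register cleanup before any step that can fail, so an early exit
# still tears down the host-side spdk_tgt.
trap 'killprocess $hostpid' EXIT

echo "test body runs"
# ...on exit, normal or otherwise, the trap fires and kills $hostpid
```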
00:21:11.390 [2024-11-27 05:38:07.779818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370754 ] 00:21:11.390 [2024-11-27 05:38:07.932689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.648 [2024-11-27 05:38:08.032821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.213 05:38:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.213 05:38:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:21:12.213 05:38:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:12.471 05:38:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:12.730 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7204a79d-0362-46a3-9926-8748613aec03 00:21:12.730 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:12.730 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7204A79D036246A399268748613AEC03 -i 00:21:12.730 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c293a18b-c46c-473e-bf46-46ccb5e29348 00:21:12.730 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:12.730 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C293A18BC46C473EBF4646CCB5E29348 -i 00:21:12.988 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:13.246 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:13.504 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:13.504 05:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:13.763 nvme0n1 00:21:13.763 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:13.763 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:14.021 nvme1n2 00:21:14.021 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:21:14.021 05:38:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:14.021 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:14.021 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:14.021 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:14.021 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:14.280 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:14.280 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:14.280 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:14.280 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7204a79d-0362-46a3-9926-8748613aec03 == \7\2\0\4\a\7\9\d\-\0\3\6\2\-\4\6\a\3\-\9\9\2\6\-\8\7\4\8\6\1\3\a\e\c\0\3 ]] 00:21:14.280 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:14.280 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:14.280 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:14.539 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c293a18b-c46c-473e-bf46-46ccb5e29348 == \c\2\9\3\a\1\8\b\-\c\4\6\c\-\4\7\3\e\-\b\f\4\6\-\4\6\c\c\b\5\e\2\9\3\4\8 
]] 00:21:14.539 05:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7204a79d-0362-46a3-9926-8748613aec03 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7204A79D036246A399268748613AEC03 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7204A79D036246A399268748613AEC03 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.798 05:38:11 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:21:14.798 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7204A79D036246A399268748613AEC03 00:21:15.056 [2024-11-27 05:38:11.547677] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:21:15.056 [2024-11-27 05:38:11.547731] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:21:15.056 [2024-11-27 05:38:11.547748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.056 request: 00:21:15.056 { 00:21:15.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.056 "namespace": { 00:21:15.056 "bdev_name": "invalid", 00:21:15.056 "nsid": 1, 00:21:15.056 "nguid": "7204A79D036246A399268748613AEC03", 00:21:15.056 "no_auto_visible": false, 00:21:15.056 "hide_metadata": false 00:21:15.056 }, 00:21:15.056 "method": "nvmf_subsystem_add_ns", 00:21:15.056 "req_id": 1 00:21:15.056 } 00:21:15.056 Got JSON-RPC error response 00:21:15.056 response: 00:21:15.056 { 00:21:15.056 "code": -32602, 00:21:15.056 "message": "Invalid parameters" 00:21:15.056 } 00:21:15.056 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
00:21:15.056 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.056 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.056 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.056 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7204a79d-0362-46a3-9926-8748613aec03 00:21:15.056 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:15.056 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7204A79D036246A399268748613AEC03 -i 00:21:15.348 05:38:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:21:17.244 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:21:17.244 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:21:17.244 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:17.501 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:21:17.501 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3370754 00:21:17.501 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3370754 ']' 00:21:17.502 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3370754 00:21:17.502 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:21:17.502 
05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.502 05:38:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3370754 00:21:17.502 05:38:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.502 05:38:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.502 05:38:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3370754' 00:21:17.502 killing process with pid 3370754 00:21:17.502 05:38:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3370754 00:21:17.502 05:38:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3370754 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:20.033 rmmod nvme_rdma 00:21:20.033 rmmod nvme_fabrics 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3368470 ']' 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3368470 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3368470 ']' 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3368470 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3368470 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3368470' 00:21:20.033 killing process with pid 3368470 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3368470 00:21:20.033 05:38:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@978 -- # wait 3368470 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:21.939 00:21:21.939 real 0m31.978s 00:21:21.939 user 0m39.322s 00:21:21.939 sys 0m9.355s 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:21.939 ************************************ 00:21:21.939 END TEST nvmf_ns_masking 00:21:21.939 ************************************ 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:21.939 ************************************ 00:21:21.939 START TEST nvmf_nvme_cli 00:21:21.939 ************************************ 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:21.939 * Looking for test storage... 
00:21:21.939 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:21:21.939 05:38:18 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.939 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:21.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.940 
--rc genhtml_branch_coverage=1 00:21:21.940 --rc genhtml_function_coverage=1 00:21:21.940 --rc genhtml_legend=1 00:21:21.940 --rc geninfo_all_blocks=1 00:21:21.940 --rc geninfo_unexecuted_blocks=1 00:21:21.940 00:21:21.940 ' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:21.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.940 --rc genhtml_branch_coverage=1 00:21:21.940 --rc genhtml_function_coverage=1 00:21:21.940 --rc genhtml_legend=1 00:21:21.940 --rc geninfo_all_blocks=1 00:21:21.940 --rc geninfo_unexecuted_blocks=1 00:21:21.940 00:21:21.940 ' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:21.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.940 --rc genhtml_branch_coverage=1 00:21:21.940 --rc genhtml_function_coverage=1 00:21:21.940 --rc genhtml_legend=1 00:21:21.940 --rc geninfo_all_blocks=1 00:21:21.940 --rc geninfo_unexecuted_blocks=1 00:21:21.940 00:21:21.940 ' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:21.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.940 --rc genhtml_branch_coverage=1 00:21:21.940 --rc genhtml_function_coverage=1 00:21:21.940 --rc genhtml_legend=1 00:21:21.940 --rc geninfo_all_blocks=1 00:21:21.940 --rc geninfo_unexecuted_blocks=1 00:21:21.940 00:21:21.940 ' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.940 
05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.940 
05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.940 05:38:18 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.940 05:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:21:31.925 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
mlx=() 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:31.926 05:38:26 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:31.926 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:31.926 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:31.926 05:38:26 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:31.926 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.926 05:38:26 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:31.926 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:31.926 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.926 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:31.926 altname enp217s0f0np0 00:21:31.926 altname ens818f0np0 00:21:31.926 inet 192.168.100.8/24 
scope global mlx_0_0 00:21:31.926 valid_lft forever preferred_lft forever 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:31.926 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:31.927 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.927 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:31.927 altname enp217s0f1np1 00:21:31.927 altname ens818f1np1 00:21:31.927 inet 192.168.100.9/24 scope global mlx_0_1 00:21:31.927 valid_lft forever preferred_lft forever 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:31.927 05:38:26 
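The `get_ip_address` helper traced above derives each interface's IPv4 address by piping the one-line output of `ip -o -4 addr show` through `awk` and `cut`. A minimal standalone sketch of that pipeline, run here against a fabricated sample line (on a live host you would feed it the real `ip -o -4 addr show mlx_0_0` output instead):

```shell
# Sketch of nvmf/common.sh's get_ip_address (common.sh@116-117):
# field 4 of `ip -o -4 addr show <if>` is "addr/prefix"; cut drops the prefix.
# The sample line is fabricated for illustration.
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
ip=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip"   # the bare address, with the /24 prefix stripped
```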
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@108 -- # echo mlx_0_1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:31.927 192.168.100.9' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:31.927 192.168.100.9' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:31.927 192.168.100.9' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3376572 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3376572 00:21:31.927 05:38:26 
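In the trace, `RDMA_IP_LIST` holds one address per line and the first/second target IPs are selected with `head`/`tail` (common.sh@485-486). A self-contained sketch of that selection, using the two addresses the log reports:

```shell
# Sketch of how nvmf/common.sh splits a newline-separated IP list into the
# first and second target addresses. The list mirrors the values in the log.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

The `tail -n +2 | head -n 1` pair picks exactly the second line, which is why the trace runs both commands back to back.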
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3376572 ']' 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.927 05:38:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.927 [2024-11-27 05:38:27.071904] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:31.927 [2024-11-27 05:38:27.072011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.927 [2024-11-27 05:38:27.226823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.927 [2024-11-27 05:38:27.327389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.927 [2024-11-27 05:38:27.327439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.927 [2024-11-27 05:38:27.327452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.927 [2024-11-27 05:38:27.327465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:31.927 [2024-11-27 05:38:27.327476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.927 [2024-11-27 05:38:27.330020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.927 [2024-11-27 05:38:27.330097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.927 [2024-11-27 05:38:27.330149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.927 [2024-11-27 05:38:27.330157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.927 05:38:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.927 [2024-11-27 05:38:27.979459] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f653fb76940) succeed. 00:21:31.927 [2024-11-27 05:38:27.989414] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f653fb32940) succeed. 
00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.927 Malloc0 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.927 Malloc1 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.927 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@10 -- # set +x 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.928 [2024-11-27 05:38:28.424137] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.928 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:21:32.187 00:21:32.187 Discovery Log Number of Records 2, Generation counter 2 00:21:32.187 =====Discovery Log Entry 0====== 00:21:32.187 trtype: rdma 00:21:32.187 adrfam: ipv4 00:21:32.187 subtype: current discovery subsystem 00:21:32.187 treq: not required 00:21:32.187 portid: 0 00:21:32.187 trsvcid: 4420 00:21:32.187 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:32.187 traddr: 192.168.100.8 00:21:32.187 eflags: explicit discovery connections, duplicate discovery information 00:21:32.187 rdma_prtype: not specified 00:21:32.187 rdma_qptype: connected 00:21:32.187 rdma_cms: rdma-cm 00:21:32.187 rdma_pkey: 0x0000 00:21:32.187 =====Discovery Log Entry 1====== 00:21:32.187 trtype: rdma 00:21:32.187 adrfam: ipv4 00:21:32.187 subtype: nvme subsystem 00:21:32.187 treq: not required 00:21:32.187 portid: 0 00:21:32.187 trsvcid: 4420 00:21:32.187 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:32.187 traddr: 192.168.100.8 00:21:32.187 eflags: none 00:21:32.187 rdma_prtype: not specified 00:21:32.187 rdma_qptype: connected 00:21:32.187 rdma_cms: rdma-cm 00:21:32.187 rdma_pkey: 0x0000 00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r 
dev _
00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:21:32.187 05:38:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:21:33.124 05:38:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:21:33.124 05:38:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:21:33.124 05:38:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:21:33.124 05:38:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:21:33.124 05:38:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:21:33.124 05:38:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:21:35.028 /dev/nvme0n2 ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:21:35.028 05:38:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:21:36.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:21:36.405 rmmod nvme_rdma
00:21:36.405 rmmod nvme_fabrics
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3376572 ']'
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3376572
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3376572 ']'
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3376572
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3376572
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3376572'
00:21:36.405 killing process with pid 3376572
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3376572
00:21:36.405 05:38:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3376572
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:21:38.311
00:21:38.311 real	0m16.599s
00:21:38.311 user	0m30.371s
00:21:38.311 sys	0m7.497s
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:21:38.311 ************************************
00:21:38.311 END TEST nvmf_nvme_cli
00:21:38.311 ************************************
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]]
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:38.311 ************************************
00:21:38.311 START TEST nvmf_auth_target
00:21:38.311 ************************************
00:21:38.311 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma
00:21:38.571 * Looking for test storage...
00:21:38.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:21:38.571 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:38.571 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:38.571 05:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:21:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.571 --rc genhtml_branch_coverage=1
00:21:38.571 --rc genhtml_function_coverage=1
00:21:38.571 --rc genhtml_legend=1
00:21:38.571 --rc geninfo_all_blocks=1
00:21:38.571 --rc geninfo_unexecuted_blocks=1
00:21:38.571
00:21:38.571 '
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:21:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.571 --rc genhtml_branch_coverage=1
00:21:38.571 --rc genhtml_function_coverage=1
00:21:38.571 --rc genhtml_legend=1
00:21:38.571 --rc geninfo_all_blocks=1
00:21:38.571 --rc geninfo_unexecuted_blocks=1
00:21:38.571
00:21:38.571 '
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:21:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.571 --rc genhtml_branch_coverage=1
00:21:38.571 --rc genhtml_function_coverage=1
00:21:38.571 --rc genhtml_legend=1
00:21:38.571 --rc geninfo_all_blocks=1
00:21:38.571 --rc geninfo_unexecuted_blocks=1
00:21:38.571
00:21:38.571 '
00:21:38.571 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:21:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.571 --rc genhtml_branch_coverage=1
00:21:38.571 --rc genhtml_function_coverage=1
00:21:38.571 --rc genhtml_legend=1
00:21:38.571 --rc geninfo_all_blocks=1
00:21:38.571 --rc geninfo_unexecuted_blocks=1
00:21:38.571
00:21:38.571 '
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:38.572 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable
00:21:38.572 05:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=()
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=()
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=()
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=()
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=()
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:21:46.695 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:21:46.695 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:21:46.695 Found net devices under 0000:d9:00.0: mlx_0_0
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:46.695 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:21:46.696 Found net devices under 0000:d9:00.1: mlx_0_1
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:21:46.696 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:21:46.955 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:21:46.955 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:21:46.956 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:21:46.956     link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:21:46.956     altname enp217s0f0np0
00:21:46.956     altname ens818f0np0
00:21:46.956     inet 192.168.100.8/24 scope global mlx_0_0
00:21:46.956        valid_lft forever preferred_lft forever
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target --
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:46.956 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:46.956 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:46.956 altname enp217s0f1np1 00:21:46.956 altname ens818f1np1 00:21:46.956 inet 192.168.100.9/24 scope global mlx_0_1 00:21:46.956 valid_lft forever preferred_lft forever 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:46.956 05:38:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:46.956 192.168.100.9' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:46.956 192.168.100.9' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:46.956 192.168.100.9' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:46.956 05:38:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3381900 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3381900 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3381900 ']' 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.956 05:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3382140 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:47.893 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:47.894 05:38:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5d4602950eb4a61b1b861edb34dea94f2a1db65c5a5769b7 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.h6t 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5d4602950eb4a61b1b861edb34dea94f2a1db65c5a5769b7 0 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5d4602950eb4a61b1b861edb34dea94f2a1db65c5a5769b7 0 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5d4602950eb4a61b1b861edb34dea94f2a1db65c5a5769b7 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.h6t 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.h6t 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.h6t 00:21:47.894 
05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8806bd176ffc7bdf461e7ab36bf0283e2254da09265b687877c1d2339ec2ce8d 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4Fy 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8806bd176ffc7bdf461e7ab36bf0283e2254da09265b687877c1d2339ec2ce8d 3 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8806bd176ffc7bdf461e7ab36bf0283e2254da09265b687877c1d2339ec2ce8d 3 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8806bd176ffc7bdf461e7ab36bf0283e2254da09265b687877c1d2339ec2ce8d 
00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:47.894 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4Fy 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4Fy 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.4Fy 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e3db696f099514405a6030d25a99b4ae 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.d8O 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e3db696f099514405a6030d25a99b4ae 1 00:21:48.154 05:38:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e3db696f099514405a6030d25a99b4ae 1 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e3db696f099514405a6030d25a99b4ae 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.d8O 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.d8O 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.d8O 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@755 -- # key=5f7f6cad6deea8226814a18c2f79043c7500e0d244eae7b8 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3VY 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5f7f6cad6deea8226814a18c2f79043c7500e0d244eae7b8 2 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5f7f6cad6deea8226814a18c2f79043c7500e0d244eae7b8 2 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5f7f6cad6deea8226814a18c2f79043c7500e0d244eae7b8 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3VY 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3VY 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3VY 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:48.154 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aa3e7abf102fe248bcdbfb10f0a80168cc891937be471646 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gnF 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aa3e7abf102fe248bcdbfb10f0a80168cc891937be471646 2 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aa3e7abf102fe248bcdbfb10f0a80168cc891937be471646 2 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aa3e7abf102fe248bcdbfb10f0a80168cc891937be471646 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gnF 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo 
/tmp/spdk.key-sha384.gnF 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.gnF 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=77d0cf2b34cc116864daf624e2a0eb04 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.I4m 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 77d0cf2b34cc116864daf624e2a0eb04 1 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 77d0cf2b34cc116864daf624e2a0eb04 1 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=77d0cf2b34cc116864daf624e2a0eb04 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:48.155 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.I4m 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.I4m 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.I4m 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a569097095e2493c89ee23972434559d6fca041a1a72270ace6c2532562a7011 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.30F 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
a569097095e2493c89ee23972434559d6fca041a1a72270ace6c2532562a7011 3 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a569097095e2493c89ee23972434559d6fca041a1a72270ace6c2532562a7011 3 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a569097095e2493c89ee23972434559d6fca041a1a72270ace6c2532562a7011 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.30F 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.30F 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.30F 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3381900 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3381900 ']' 00:21:48.414 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.415 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.415 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:48.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.415 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.415 05:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3382140 /var/tmp/host.sock 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3382140 ']' 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:48.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.674 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.933 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.933 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:48.933 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:48.933 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.933 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h6t 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.h6t 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.h6t 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- 
# [[ -n /tmp/spdk.key-sha512.4Fy ]] 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Fy 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.192 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Fy 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Fy 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.d8O 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.d8O 00:21:49.451 05:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.d8O 00:21:49.710 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.3VY ]] 00:21:49.710 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3VY 00:21:49.710 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.710 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.710 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.710 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3VY 00:21:49.710 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3VY 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gnF 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gnF 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gnF 00:21:49.969 05:38:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.I4m ]] 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I4m 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I4m 00:21:49.969 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I4m 00:21:50.228 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:50.228 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.30F 00:21:50.228 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.228 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.228 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.228 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.30F 00:21:50.228 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.30F 
00:21:50.487 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:50.487 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:50.487 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.487 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.487 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:50.487 05:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:50.487 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
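The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion in connect_authenticate above adds the controller key only when a ckey was generated; `ckeys[3]=` earlier in the trace is empty, which is why key3 is later registered with `--dhchap-key` alone. A hypothetical Python equivalent of that argument assembly (helper name and dict are illustrative, not SPDK API):

```python
def dhchap_args(keyid: int, ckeys: dict) -> list:
    # Mirrors bash's ${ckeys[$i]:+...} expansion: the optional
    # --dhchap-ctrlr-key pair is emitted only for a non-empty ckey entry.
    args = ["--dhchap-key", f"key{keyid}"]
    if ckeys.get(keyid):  # ':+' skips both unset and empty values
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# ckey paths as seen in this trace; key3 deliberately has no controller key.
ckeys = {0: "/tmp/spdk.key-sha512.4Fy", 3: ""}
```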
00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.488 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.747 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.006 05:38:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.006 { 00:21:51.006 "cntlid": 1, 00:21:51.006 "qid": 0, 00:21:51.006 "state": "enabled", 00:21:51.006 "thread": "nvmf_tgt_poll_group_000", 00:21:51.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:51.006 "listen_address": { 00:21:51.006 "trtype": "RDMA", 00:21:51.006 "adrfam": "IPv4", 00:21:51.006 "traddr": "192.168.100.8", 00:21:51.006 "trsvcid": "4420" 00:21:51.006 }, 00:21:51.006 "peer_address": { 00:21:51.006 "trtype": "RDMA", 00:21:51.006 "adrfam": "IPv4", 00:21:51.006 "traddr": "192.168.100.8", 00:21:51.006 "trsvcid": "38858" 00:21:51.006 }, 00:21:51.006 "auth": { 00:21:51.006 "state": "completed", 00:21:51.006 "digest": "sha256", 00:21:51.006 "dhgroup": "null" 00:21:51.006 } 00:21:51.006 } 00:21:51.006 ]' 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.006 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.265 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:51.265 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.265 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.265 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.265 05:38:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.525 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:21:51.525 05:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:21:52.094 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.094 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:52.094 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.094 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.094 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.094 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.094 05:38:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:52.094 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.353 05:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.612 00:21:52.612 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.612 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.612 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.870 { 00:21:52.870 "cntlid": 3, 00:21:52.870 "qid": 0, 00:21:52.870 "state": "enabled", 00:21:52.870 "thread": "nvmf_tgt_poll_group_000", 00:21:52.870 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:52.870 "listen_address": { 00:21:52.870 "trtype": "RDMA", 00:21:52.870 "adrfam": "IPv4", 00:21:52.870 "traddr": "192.168.100.8", 00:21:52.870 "trsvcid": "4420" 00:21:52.870 }, 00:21:52.870 "peer_address": { 00:21:52.870 "trtype": "RDMA", 00:21:52.870 "adrfam": "IPv4", 00:21:52.870 "traddr": "192.168.100.8", 00:21:52.870 "trsvcid": "36525" 00:21:52.870 }, 00:21:52.870 "auth": { 00:21:52.870 "state": "completed", 00:21:52.870 "digest": "sha256", 00:21:52.870 "dhgroup": "null" 00:21:52.870 } 00:21:52.870 } 00:21:52.870 ]' 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.870 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.871 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:52.871 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.871 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.871 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.871 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.129 05:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:21:53.129 05:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:21:53.697 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.960 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.223 
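Each connect is verified by fetching `nvmf_subsystem_get_qpairs` output and running the jq checks `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` seen in this trace. The same checks in Python, against a qpairs document abridged from the JSON printed earlier (cntlid/qid values from the trace):

```python
import json

# Abridged nvmf_subsystem_get_qpairs output from the trace above.
qpairs = json.loads("""[
  { "cntlid": 1, "qid": 0, "state": "enabled",
    "auth": { "state": "completed", "digest": "sha256", "dhgroup": "null" } }
]""")

# Equivalent of the jq '.[0].auth.*' comparisons in auth.sh@75-77.
auth = qpairs[0]["auth"]
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"  # DH-HMAC-CHAP negotiation finished
```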
00:21:54.223 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.223 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.223 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.482 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.482 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.482 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.482 05:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.482 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.482 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.482 { 00:21:54.482 "cntlid": 5, 00:21:54.482 "qid": 0, 00:21:54.482 "state": "enabled", 00:21:54.482 "thread": "nvmf_tgt_poll_group_000", 00:21:54.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:54.482 "listen_address": { 00:21:54.482 "trtype": "RDMA", 00:21:54.482 "adrfam": "IPv4", 00:21:54.482 "traddr": "192.168.100.8", 00:21:54.482 "trsvcid": "4420" 00:21:54.482 }, 00:21:54.482 "peer_address": { 00:21:54.482 "trtype": "RDMA", 00:21:54.482 "adrfam": "IPv4", 00:21:54.482 "traddr": "192.168.100.8", 00:21:54.482 "trsvcid": "48682" 00:21:54.482 }, 00:21:54.482 "auth": { 00:21:54.482 "state": "completed", 00:21:54.482 "digest": "sha256", 00:21:54.482 "dhgroup": "null" 00:21:54.482 } 00:21:54.482 } 00:21:54.482 ]' 00:21:54.482 05:38:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.482 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.482 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.741 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:54.741 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.741 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.741 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.741 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.001 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:21:55.001 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:21:55.567 05:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 
1 controller(s) 00:21:55.567 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:55.567 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.567 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.567 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.567 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.567 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:55.567 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.825 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.826 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.826 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.083 00:21:56.083 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.083 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.083 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.342 05:38:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.342 { 00:21:56.342 "cntlid": 7, 00:21:56.342 "qid": 0, 00:21:56.342 "state": "enabled", 00:21:56.342 "thread": "nvmf_tgt_poll_group_000", 00:21:56.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:56.342 "listen_address": { 00:21:56.342 "trtype": "RDMA", 00:21:56.342 "adrfam": "IPv4", 00:21:56.342 "traddr": "192.168.100.8", 00:21:56.342 "trsvcid": "4420" 00:21:56.342 }, 00:21:56.342 "peer_address": { 00:21:56.342 "trtype": "RDMA", 00:21:56.342 "adrfam": "IPv4", 00:21:56.342 "traddr": "192.168.100.8", 00:21:56.342 "trsvcid": "50704" 00:21:56.342 }, 00:21:56.342 "auth": { 00:21:56.342 "state": "completed", 00:21:56.342 "digest": "sha256", 00:21:56.342 "dhgroup": "null" 00:21:56.342 } 00:21:56.342 } 00:21:56.342 ]' 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.342 05:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.601 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:21:56.601 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:21:57.168 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.168 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:57.168 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.168 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.168 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.168 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.168 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.168 05:38:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.169 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.427 05:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.686 00:21:57.686 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.686 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.686 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.944 { 00:21:57.944 "cntlid": 9, 00:21:57.944 "qid": 0, 00:21:57.944 "state": "enabled", 00:21:57.944 "thread": "nvmf_tgt_poll_group_000", 00:21:57.944 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:57.944 "listen_address": { 00:21:57.944 "trtype": "RDMA", 00:21:57.944 "adrfam": "IPv4", 00:21:57.944 "traddr": "192.168.100.8", 00:21:57.944 "trsvcid": "4420" 00:21:57.944 }, 00:21:57.944 "peer_address": { 00:21:57.944 "trtype": "RDMA", 00:21:57.944 "adrfam": "IPv4", 00:21:57.944 "traddr": "192.168.100.8", 00:21:57.944 "trsvcid": "57846" 00:21:57.944 }, 00:21:57.944 "auth": { 00:21:57.944 "state": "completed", 00:21:57.944 "digest": "sha256", 00:21:57.944 "dhgroup": "ffdhe2048" 00:21:57.944 } 00:21:57.944 } 00:21:57.944 ]' 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.944 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.202 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret 
DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:21:58.202 05:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:21:58.768 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.026 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:59.026 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.026 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.026 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.026 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.026 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:59.026 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:59.284 05:38:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.284 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.284 00:21:59.542 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.542 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.542 05:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.542 { 00:21:59.542 "cntlid": 11, 00:21:59.542 "qid": 0, 00:21:59.542 "state": "enabled", 00:21:59.542 "thread": "nvmf_tgt_poll_group_000", 00:21:59.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:59.542 "listen_address": { 00:21:59.542 "trtype": "RDMA", 00:21:59.542 "adrfam": "IPv4", 00:21:59.542 "traddr": "192.168.100.8", 00:21:59.542 "trsvcid": "4420" 00:21:59.542 }, 00:21:59.542 "peer_address": { 00:21:59.542 "trtype": "RDMA", 00:21:59.542 "adrfam": "IPv4", 00:21:59.542 "traddr": "192.168.100.8", 00:21:59.542 "trsvcid": "43646" 
00:21:59.542 }, 00:21:59.542 "auth": { 00:21:59.542 "state": "completed", 00:21:59.542 "digest": "sha256", 00:21:59.542 "dhgroup": "ffdhe2048" 00:21:59.542 } 00:21:59.542 } 00:21:59.542 ]' 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.542 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.543 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.801 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:59.801 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.801 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.801 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.801 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.059 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:00.059 05:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret 
DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:00.625 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.883 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.142 00:22:01.142 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.142 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.142 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.409 { 00:22:01.409 "cntlid": 13, 00:22:01.409 "qid": 0, 00:22:01.409 "state": "enabled", 00:22:01.409 "thread": "nvmf_tgt_poll_group_000", 00:22:01.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:01.409 "listen_address": { 00:22:01.409 "trtype": "RDMA", 00:22:01.409 "adrfam": "IPv4", 00:22:01.409 "traddr": "192.168.100.8", 00:22:01.409 "trsvcid": "4420" 00:22:01.409 }, 00:22:01.409 "peer_address": { 00:22:01.409 "trtype": "RDMA", 00:22:01.409 "adrfam": "IPv4", 00:22:01.409 "traddr": "192.168.100.8", 00:22:01.409 "trsvcid": "40352" 00:22:01.409 }, 00:22:01.409 "auth": { 00:22:01.409 "state": "completed", 00:22:01.409 "digest": "sha256", 00:22:01.409 "dhgroup": "ffdhe2048" 00:22:01.409 } 00:22:01.409 } 00:22:01.409 ]' 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.409 05:38:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.409 05:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.748 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:01.748 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:02.406 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.407 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:02.407 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:02.407 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.407 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:02.407 05:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.665 
05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.665 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.923 00:22:02.923 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.923 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.923 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.182 
05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.182 { 00:22:03.182 "cntlid": 15, 00:22:03.182 "qid": 0, 00:22:03.182 "state": "enabled", 00:22:03.182 "thread": "nvmf_tgt_poll_group_000", 00:22:03.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:03.182 "listen_address": { 00:22:03.182 "trtype": "RDMA", 00:22:03.182 "adrfam": "IPv4", 00:22:03.182 "traddr": "192.168.100.8", 00:22:03.182 "trsvcid": "4420" 00:22:03.182 }, 00:22:03.182 "peer_address": { 00:22:03.182 "trtype": "RDMA", 00:22:03.182 "adrfam": "IPv4", 00:22:03.182 "traddr": "192.168.100.8", 00:22:03.182 "trsvcid": "33660" 00:22:03.182 }, 00:22:03.182 "auth": { 00:22:03.182 "state": "completed", 00:22:03.182 "digest": "sha256", 00:22:03.182 "dhgroup": "ffdhe2048" 00:22:03.182 } 00:22:03.182 } 00:22:03.182 ]' 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.182 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.441 05:38:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:03.441 05:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:04.009 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.268 05:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.528 00:22:04.528 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.528 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.528 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.786 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.787 { 00:22:04.787 "cntlid": 17, 00:22:04.787 "qid": 0, 00:22:04.787 "state": "enabled", 00:22:04.787 "thread": "nvmf_tgt_poll_group_000", 00:22:04.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:04.787 "listen_address": { 00:22:04.787 "trtype": "RDMA", 00:22:04.787 "adrfam": "IPv4", 00:22:04.787 "traddr": "192.168.100.8", 00:22:04.787 "trsvcid": "4420" 00:22:04.787 }, 00:22:04.787 "peer_address": { 00:22:04.787 "trtype": "RDMA", 00:22:04.787 "adrfam": 
"IPv4", 00:22:04.787 "traddr": "192.168.100.8", 00:22:04.787 "trsvcid": "45246" 00:22:04.787 }, 00:22:04.787 "auth": { 00:22:04.787 "state": "completed", 00:22:04.787 "digest": "sha256", 00:22:04.787 "dhgroup": "ffdhe3072" 00:22:04.787 } 00:22:04.787 } 00:22:04.787 ]' 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.787 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.045 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.045 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.045 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.045 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:05.045 05:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=ffdhe3072 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.980 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.238 00:22:06.238 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.238 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.238 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.496 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.497 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.497 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.497 05:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.497 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.497 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.497 { 00:22:06.497 "cntlid": 19, 00:22:06.497 "qid": 0, 00:22:06.497 "state": "enabled", 00:22:06.497 "thread": "nvmf_tgt_poll_group_000", 00:22:06.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:06.497 "listen_address": { 00:22:06.497 "trtype": "RDMA", 00:22:06.497 "adrfam": "IPv4", 00:22:06.497 "traddr": "192.168.100.8", 00:22:06.497 "trsvcid": "4420" 00:22:06.497 }, 00:22:06.497 "peer_address": { 00:22:06.497 "trtype": "RDMA", 00:22:06.497 "adrfam": "IPv4", 00:22:06.497 "traddr": "192.168.100.8", 00:22:06.497 "trsvcid": "46165" 00:22:06.497 }, 00:22:06.497 "auth": { 00:22:06.497 "state": "completed", 00:22:06.497 "digest": "sha256", 00:22:06.497 "dhgroup": "ffdhe3072" 00:22:06.497 } 00:22:06.497 } 00:22:06.497 ]' 00:22:06.497 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.497 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.497 05:39:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.497 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:06.755 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.755 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.755 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.755 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.014 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:07.014 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:07.582 05:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.582 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:07.582 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.582 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.582 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.582 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.582 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:07.582 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.841 05:39:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.841 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.101 00:22:08.101 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.101 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.101 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.360 { 00:22:08.360 "cntlid": 21, 00:22:08.360 "qid": 0, 00:22:08.360 "state": "enabled", 00:22:08.360 "thread": "nvmf_tgt_poll_group_000", 00:22:08.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:08.360 "listen_address": { 00:22:08.360 "trtype": "RDMA", 00:22:08.360 "adrfam": "IPv4", 00:22:08.360 "traddr": "192.168.100.8", 00:22:08.360 "trsvcid": "4420" 00:22:08.360 }, 00:22:08.360 "peer_address": { 00:22:08.360 "trtype": "RDMA", 00:22:08.360 "adrfam": "IPv4", 00:22:08.360 "traddr": "192.168.100.8", 00:22:08.360 "trsvcid": "45356" 00:22:08.360 }, 00:22:08.360 "auth": { 00:22:08.360 "state": "completed", 00:22:08.360 "digest": "sha256", 00:22:08.360 "dhgroup": "ffdhe3072" 00:22:08.360 } 00:22:08.360 } 00:22:08.360 ]' 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:22:08.360 05:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.619 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:08.619 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:09.187 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.446 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:09.446 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.446 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.446 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.446 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.446 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:09.446 05:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.446 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.704 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.704 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.705 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.705 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.705 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.964 { 00:22:09.964 "cntlid": 23, 00:22:09.964 "qid": 0, 00:22:09.964 "state": "enabled", 00:22:09.964 "thread": "nvmf_tgt_poll_group_000", 00:22:09.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:09.964 "listen_address": { 00:22:09.964 "trtype": "RDMA", 00:22:09.964 
"adrfam": "IPv4", 00:22:09.964 "traddr": "192.168.100.8", 00:22:09.964 "trsvcid": "4420" 00:22:09.964 }, 00:22:09.964 "peer_address": { 00:22:09.964 "trtype": "RDMA", 00:22:09.964 "adrfam": "IPv4", 00:22:09.964 "traddr": "192.168.100.8", 00:22:09.964 "trsvcid": "32921" 00:22:09.964 }, 00:22:09.964 "auth": { 00:22:09.964 "state": "completed", 00:22:09.964 "digest": "sha256", 00:22:09.964 "dhgroup": "ffdhe3072" 00:22:09.964 } 00:22:09.964 } 00:22:09.964 ]' 00:22:09.964 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.222 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:10.222 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.222 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:10.222 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.222 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.222 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.223 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.481 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:10.481 05:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:11.048 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.307 05:39:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.307 05:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.565 00:22:11.565 05:39:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.566 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.566 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.824 { 00:22:11.824 "cntlid": 25, 00:22:11.824 "qid": 0, 00:22:11.824 "state": "enabled", 00:22:11.824 "thread": "nvmf_tgt_poll_group_000", 00:22:11.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:11.824 "listen_address": { 00:22:11.824 "trtype": "RDMA", 00:22:11.824 "adrfam": "IPv4", 00:22:11.824 "traddr": "192.168.100.8", 00:22:11.824 "trsvcid": "4420" 00:22:11.824 }, 00:22:11.824 "peer_address": { 00:22:11.824 "trtype": "RDMA", 00:22:11.824 "adrfam": "IPv4", 00:22:11.824 "traddr": "192.168.100.8", 00:22:11.824 "trsvcid": "38226" 00:22:11.824 }, 00:22:11.824 "auth": { 00:22:11.824 "state": "completed", 00:22:11.824 "digest": "sha256", 00:22:11.824 "dhgroup": "ffdhe4096" 00:22:11.824 } 00:22:11.824 } 00:22:11.824 ]' 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.824 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.083 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.083 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.083 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.083 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:12.083 05:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:12.649 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:22:12.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.907 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:12.907 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.908 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.908 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.908 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.908 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:12.908 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.166 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.425 00:22:13.425 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.425 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.425 05:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.685 { 00:22:13.685 "cntlid": 27, 00:22:13.685 "qid": 0, 00:22:13.685 "state": "enabled", 00:22:13.685 "thread": "nvmf_tgt_poll_group_000", 00:22:13.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:13.685 "listen_address": { 00:22:13.685 "trtype": "RDMA", 00:22:13.685 "adrfam": "IPv4", 00:22:13.685 "traddr": "192.168.100.8", 00:22:13.685 "trsvcid": "4420" 00:22:13.685 }, 00:22:13.685 "peer_address": { 00:22:13.685 "trtype": "RDMA", 00:22:13.685 "adrfam": "IPv4", 00:22:13.685 "traddr": "192.168.100.8", 00:22:13.685 "trsvcid": "47896" 00:22:13.685 }, 00:22:13.685 "auth": { 00:22:13.685 "state": "completed", 00:22:13.685 "digest": "sha256", 00:22:13.685 "dhgroup": "ffdhe4096" 00:22:13.685 } 00:22:13.685 } 00:22:13.685 ]' 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.685 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.944 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:13.944 05:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:14.511 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.771 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.339 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.339 { 00:22:15.339 
"cntlid": 29, 00:22:15.339 "qid": 0, 00:22:15.339 "state": "enabled", 00:22:15.339 "thread": "nvmf_tgt_poll_group_000", 00:22:15.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:15.339 "listen_address": { 00:22:15.339 "trtype": "RDMA", 00:22:15.339 "adrfam": "IPv4", 00:22:15.339 "traddr": "192.168.100.8", 00:22:15.339 "trsvcid": "4420" 00:22:15.339 }, 00:22:15.339 "peer_address": { 00:22:15.339 "trtype": "RDMA", 00:22:15.339 "adrfam": "IPv4", 00:22:15.339 "traddr": "192.168.100.8", 00:22:15.339 "trsvcid": "52265" 00:22:15.339 }, 00:22:15.339 "auth": { 00:22:15.339 "state": "completed", 00:22:15.339 "digest": "sha256", 00:22:15.339 "dhgroup": "ffdhe4096" 00:22:15.339 } 00:22:15.339 } 00:22:15.339 ]' 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.339 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.340 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.598 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:15.598 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.598 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.598 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.598 05:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.857 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:15.857 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:16.425 05:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:16.684 05:39:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.684 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.943 00:22:16.943 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.943 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.943 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.201 { 00:22:17.201 "cntlid": 31, 00:22:17.201 "qid": 0, 00:22:17.201 "state": "enabled", 00:22:17.201 "thread": "nvmf_tgt_poll_group_000", 00:22:17.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:17.201 "listen_address": { 00:22:17.201 "trtype": "RDMA", 00:22:17.201 "adrfam": "IPv4", 00:22:17.201 "traddr": "192.168.100.8", 00:22:17.201 "trsvcid": "4420" 00:22:17.201 }, 00:22:17.201 "peer_address": { 00:22:17.201 "trtype": "RDMA", 00:22:17.201 "adrfam": "IPv4", 00:22:17.201 "traddr": "192.168.100.8", 00:22:17.201 "trsvcid": "43434" 00:22:17.201 }, 00:22:17.201 "auth": { 00:22:17.201 "state": "completed", 00:22:17.201 "digest": 
"sha256", 00:22:17.201 "dhgroup": "ffdhe4096" 00:22:17.201 } 00:22:17.201 } 00:22:17.201 ]' 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.201 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.459 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:17.459 05:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:18.028 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.287 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.287 05:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.855 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.855 05:39:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.855 { 00:22:18.855 "cntlid": 33, 00:22:18.855 "qid": 0, 00:22:18.855 "state": "enabled", 00:22:18.855 "thread": "nvmf_tgt_poll_group_000", 00:22:18.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:18.855 "listen_address": { 00:22:18.855 "trtype": "RDMA", 00:22:18.855 "adrfam": "IPv4", 00:22:18.855 "traddr": "192.168.100.8", 00:22:18.855 "trsvcid": "4420" 00:22:18.855 }, 00:22:18.855 "peer_address": { 00:22:18.855 "trtype": "RDMA", 00:22:18.855 "adrfam": "IPv4", 00:22:18.855 "traddr": "192.168.100.8", 00:22:18.855 "trsvcid": "55545" 00:22:18.855 }, 00:22:18.855 "auth": { 00:22:18.855 "state": "completed", 00:22:18.855 "digest": "sha256", 00:22:18.855 "dhgroup": "ffdhe6144" 00:22:18.855 } 00:22:18.855 } 00:22:18.855 ]' 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.855 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.115 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:19.115 05:39:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.115 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.115 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.115 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.115 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:19.115 05:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.070 05:39:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.070 05:39:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.070 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.638 00:22:20.638 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.638 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.638 05:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.638 { 00:22:20.638 "cntlid": 35, 00:22:20.638 "qid": 0, 00:22:20.638 "state": "enabled", 00:22:20.638 "thread": "nvmf_tgt_poll_group_000", 00:22:20.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:20.638 "listen_address": { 00:22:20.638 "trtype": "RDMA", 00:22:20.638 "adrfam": "IPv4", 00:22:20.638 "traddr": "192.168.100.8", 00:22:20.638 "trsvcid": "4420" 00:22:20.638 }, 00:22:20.638 "peer_address": { 00:22:20.638 "trtype": "RDMA", 00:22:20.638 "adrfam": "IPv4", 00:22:20.638 "traddr": "192.168.100.8", 00:22:20.638 "trsvcid": "49954" 00:22:20.638 }, 00:22:20.638 "auth": { 00:22:20.638 "state": "completed", 00:22:20.638 "digest": "sha256", 00:22:20.638 "dhgroup": "ffdhe6144" 00:22:20.638 } 00:22:20.638 } 00:22:20.638 ]' 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:20.638 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.897 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.897 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.897 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.897 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.897 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:21.156 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:21.156 05:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:21.721 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.980 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.980 05:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.238 00:22:22.238 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.238 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.238 05:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.496 { 00:22:22.496 "cntlid": 37, 00:22:22.496 "qid": 0, 00:22:22.496 "state": "enabled", 00:22:22.496 "thread": "nvmf_tgt_poll_group_000", 00:22:22.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:22.496 "listen_address": { 00:22:22.496 "trtype": "RDMA", 00:22:22.496 "adrfam": "IPv4", 00:22:22.496 "traddr": "192.168.100.8", 00:22:22.496 "trsvcid": "4420" 00:22:22.496 }, 00:22:22.496 
"peer_address": { 00:22:22.496 "trtype": "RDMA", 00:22:22.496 "adrfam": "IPv4", 00:22:22.496 "traddr": "192.168.100.8", 00:22:22.496 "trsvcid": "55614" 00:22:22.496 }, 00:22:22.496 "auth": { 00:22:22.496 "state": "completed", 00:22:22.496 "digest": "sha256", 00:22:22.496 "dhgroup": "ffdhe6144" 00:22:22.496 } 00:22:22.496 } 00:22:22.496 ]' 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:22.496 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.753 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:22.753 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.753 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.753 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.753 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.011 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:23.011 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:23.576 05:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.576 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:23.576 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.576 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.576 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.576 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.576 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:23.576 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 
00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.834 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.835 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.835 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.093 00:22:24.093 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.093 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.093 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.352 { 00:22:24.352 "cntlid": 39, 00:22:24.352 "qid": 0, 00:22:24.352 "state": "enabled", 00:22:24.352 "thread": "nvmf_tgt_poll_group_000", 00:22:24.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:24.352 "listen_address": { 00:22:24.352 "trtype": "RDMA", 00:22:24.352 "adrfam": "IPv4", 00:22:24.352 "traddr": "192.168.100.8", 00:22:24.352 "trsvcid": "4420" 00:22:24.352 }, 00:22:24.352 "peer_address": { 00:22:24.352 "trtype": "RDMA", 00:22:24.352 "adrfam": "IPv4", 00:22:24.352 "traddr": "192.168.100.8", 00:22:24.352 "trsvcid": "50806" 00:22:24.352 }, 00:22:24.352 "auth": { 00:22:24.352 "state": "completed", 00:22:24.352 "digest": "sha256", 00:22:24.352 "dhgroup": "ffdhe6144" 00:22:24.352 } 00:22:24.352 } 00:22:24.352 ]' 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.352 05:39:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:24.352 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.611 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.611 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.611 05:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.611 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:24.611 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:25.549 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:25.550 05:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.550 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.118 00:22:26.118 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.118 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.118 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.377 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.377 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.377 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.377 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.378 { 00:22:26.378 "cntlid": 41, 00:22:26.378 "qid": 0, 00:22:26.378 "state": "enabled", 00:22:26.378 "thread": "nvmf_tgt_poll_group_000", 00:22:26.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:26.378 "listen_address": { 00:22:26.378 "trtype": "RDMA", 00:22:26.378 "adrfam": "IPv4", 00:22:26.378 "traddr": "192.168.100.8", 00:22:26.378 "trsvcid": "4420" 00:22:26.378 }, 00:22:26.378 "peer_address": { 00:22:26.378 "trtype": "RDMA", 00:22:26.378 "adrfam": "IPv4", 00:22:26.378 "traddr": "192.168.100.8", 00:22:26.378 "trsvcid": "38216" 00:22:26.378 }, 00:22:26.378 "auth": { 00:22:26.378 "state": "completed", 00:22:26.378 "digest": "sha256", 00:22:26.378 "dhgroup": "ffdhe8192" 00:22:26.378 } 00:22:26.378 } 00:22:26.378 ]' 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.378 05:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.636 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:26.636 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:27.201 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.461 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:27.461 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.461 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.461 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.461 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.461 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:27.461 05:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:27.461 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.720 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.978 00:22:27.978 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.978 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.978 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.236 { 00:22:28.236 "cntlid": 43, 00:22:28.236 "qid": 0, 00:22:28.236 "state": "enabled", 00:22:28.236 "thread": "nvmf_tgt_poll_group_000", 00:22:28.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:28.236 
"listen_address": {
00:22:28.236 "trtype": "RDMA",
00:22:28.236 "adrfam": "IPv4",
00:22:28.236 "traddr": "192.168.100.8",
00:22:28.236 "trsvcid": "4420"
00:22:28.236 },
00:22:28.236 "peer_address": {
00:22:28.236 "trtype": "RDMA",
00:22:28.236 "adrfam": "IPv4",
00:22:28.236 "traddr": "192.168.100.8",
00:22:28.236 "trsvcid": "58947"
00:22:28.236 },
00:22:28.236 "auth": {
00:22:28.236 "state": "completed",
00:22:28.236 "digest": "sha256",
00:22:28.236 "dhgroup": "ffdhe8192"
00:22:28.236 }
00:22:28.236 }
00:22:28.236 ]'
00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:28.236 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:28.494 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:28.494 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:28.494 05:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:28.494 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==:
00:22:28.495 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==:
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:29.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:22:29.427 05:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:29.427 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.685 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:29.685 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:29.685 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:29.685 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:29.963
00:22:29.963 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:29.963 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:29.963 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:30.221 {
00:22:30.221 "cntlid": 45,
00:22:30.221 "qid": 0,
00:22:30.221 "state": "enabled",
00:22:30.221 "thread": "nvmf_tgt_poll_group_000",
00:22:30.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:30.221 "listen_address": {
00:22:30.221 "trtype": "RDMA",
00:22:30.221 "adrfam": "IPv4",
00:22:30.221 "traddr": "192.168.100.8",
00:22:30.221 "trsvcid": "4420"
00:22:30.221 },
00:22:30.221 "peer_address": {
00:22:30.221 "trtype": "RDMA",
00:22:30.221 "adrfam": "IPv4",
00:22:30.221 "traddr": "192.168.100.8",
00:22:30.221 "trsvcid": "47884"
00:22:30.221 },
00:22:30.221 "auth": {
00:22:30.221 "state": "completed",
00:22:30.221 "digest": "sha256",
00:22:30.221 "dhgroup": "ffdhe8192"
00:22:30.221 }
00:22:30.221 }
00:22:30.221 ]'
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:30.221 05:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:30.479 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv:
00:22:30.479 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv:
00:22:31.045 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:31.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:31.304 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:31.304 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:31.304 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.304 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:31.304 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:31.304 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:22:31.304 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:31.562 05:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:31.821
00:22:31.821 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:31.821 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:31.821 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:32.079 {
00:22:32.079 "cntlid": 47,
00:22:32.079 "qid": 0,
00:22:32.079 "state": "enabled",
00:22:32.079 "thread": "nvmf_tgt_poll_group_000",
00:22:32.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:32.079 "listen_address": {
00:22:32.079 "trtype": "RDMA",
00:22:32.079 "adrfam": "IPv4",
00:22:32.079 "traddr": "192.168.100.8",
00:22:32.079 "trsvcid": "4420"
00:22:32.079 },
00:22:32.079 "peer_address": {
00:22:32.079 "trtype": "RDMA",
00:22:32.079 "adrfam": "IPv4",
00:22:32.079 "traddr": "192.168.100.8",
00:22:32.079 "trsvcid": "33641"
00:22:32.079 },
00:22:32.079 "auth": {
00:22:32.079 "state": "completed",
00:22:32.079 "digest": "sha256",
00:22:32.079 "dhgroup": "ffdhe8192"
00:22:32.079 }
00:22:32.079 }
00:22:32.079 ]'
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:22:32.079 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:32.338 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:32.338 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:32.338 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:32.338 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:32.338 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:32.338 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=:
00:22:32.338 05:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=:
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:33.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:33.273 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:33.274 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:33.274 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:33.274 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:33.274 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:33.274 05:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:33.533
00:22:33.533 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:33.533 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:33.533 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:33.791 {
00:22:33.791 "cntlid": 49,
00:22:33.791 "qid": 0,
00:22:33.791 "state": "enabled",
00:22:33.791 "thread": "nvmf_tgt_poll_group_000",
00:22:33.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:33.791 "listen_address": {
00:22:33.791 "trtype": "RDMA",
00:22:33.791 "adrfam": "IPv4",
00:22:33.791 "traddr": "192.168.100.8",
00:22:33.791 "trsvcid": "4420"
00:22:33.791 },
00:22:33.791 "peer_address": {
00:22:33.791 "trtype": "RDMA",
00:22:33.791 "adrfam": "IPv4",
00:22:33.791 "traddr": "192.168.100.8",
00:22:33.791 "trsvcid": "43430"
00:22:33.791 },
00:22:33.791 "auth": {
00:22:33.791 "state": "completed",
00:22:33.791 "digest": "sha384",
00:22:33.791 "dhgroup": "null"
00:22:33.791 }
00:22:33.791 }
00:22:33.791 ]'
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:33.791 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:34.050 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:34.050 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:34.050 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:34.050 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:34.050 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:34.050 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=:
00:22:34.050 05:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=:
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:34.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:34.987 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:35.246
00:22:35.246 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:35.246 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:35.246 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:35.504 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:35.504 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:35.504 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:35.504 05:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:35.504 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:35.504 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:35.504 {
00:22:35.504 "cntlid": 51,
00:22:35.504 "qid": 0,
00:22:35.504 "state": "enabled",
00:22:35.504 "thread": "nvmf_tgt_poll_group_000",
00:22:35.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:35.504 "listen_address": {
00:22:35.504 "trtype": "RDMA",
00:22:35.504 "adrfam": "IPv4",
00:22:35.504 "traddr": "192.168.100.8",
00:22:35.504 "trsvcid": "4420"
00:22:35.504 },
00:22:35.504 "peer_address": {
00:22:35.504 "trtype": "RDMA",
00:22:35.504 "adrfam": "IPv4",
00:22:35.504 "traddr": "192.168.100.8",
00:22:35.504 "trsvcid": "56160"
00:22:35.504 },
00:22:35.504 "auth": {
00:22:35.504 "state": "completed",
00:22:35.504 "digest": "sha384",
00:22:35.504 "dhgroup": "null"
00:22:35.504 }
00:22:35.504 }
00:22:35.504 ]'
00:22:35.504 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:35.504 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:35.505 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:35.763 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:35.763 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:35.763 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:35.763 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:35.763 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:36.022 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==:
00:22:36.022 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==:
00:22:36.590 05:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:36.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:36.590 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:36.590 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:36.590 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:36.590 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:36.590 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:36.590 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:36.590 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:36.849 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:37.108
00:22:37.108 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:37.108 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:37.108 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:37.366 {
00:22:37.366 "cntlid": 53,
00:22:37.366 "qid": 0,
00:22:37.366 "state": "enabled",
00:22:37.366 "thread": "nvmf_tgt_poll_group_000",
00:22:37.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:37.366 "listen_address": {
00:22:37.366 "trtype": "RDMA",
00:22:37.366 "adrfam": "IPv4",
00:22:37.366 "traddr": "192.168.100.8",
00:22:37.366 "trsvcid": "4420"
00:22:37.366 },
00:22:37.366 "peer_address": {
00:22:37.366 "trtype": "RDMA",
00:22:37.366 "adrfam": "IPv4",
00:22:37.366 "traddr": "192.168.100.8",
00:22:37.366 "trsvcid": "34304"
00:22:37.366 },
00:22:37.366 "auth": {
00:22:37.366 "state": "completed",
00:22:37.366 "digest": "sha384",
00:22:37.366 "dhgroup": "null"
00:22:37.366 }
00:22:37.366 }
00:22:37.366 ]'
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:37.366 05:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:37.624 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv:
00:22:37.624 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv:
00:22:38.192 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:38.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:38.451 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:38.452 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:38.452 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:38.452 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.452 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.452 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:38.452 05:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.452 05:39:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.452 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.711 00:22:38.711 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.711 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.711 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.971 { 00:22:38.971 "cntlid": 
55, 00:22:38.971 "qid": 0, 00:22:38.971 "state": "enabled", 00:22:38.971 "thread": "nvmf_tgt_poll_group_000", 00:22:38.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:38.971 "listen_address": { 00:22:38.971 "trtype": "RDMA", 00:22:38.971 "adrfam": "IPv4", 00:22:38.971 "traddr": "192.168.100.8", 00:22:38.971 "trsvcid": "4420" 00:22:38.971 }, 00:22:38.971 "peer_address": { 00:22:38.971 "trtype": "RDMA", 00:22:38.971 "adrfam": "IPv4", 00:22:38.971 "traddr": "192.168.100.8", 00:22:38.971 "trsvcid": "37708" 00:22:38.971 }, 00:22:38.971 "auth": { 00:22:38.971 "state": "completed", 00:22:38.971 "digest": "sha384", 00:22:38.971 "dhgroup": "null" 00:22:38.971 } 00:22:38.971 } 00:22:38.971 ]' 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:38.971 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.230 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.230 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.230 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.230 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:39.230 05:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.166 05:39:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.166 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.425 00:22:40.425 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.425 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.425 05:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.684 { 00:22:40.684 "cntlid": 57, 00:22:40.684 "qid": 0, 00:22:40.684 "state": "enabled", 00:22:40.684 "thread": "nvmf_tgt_poll_group_000", 00:22:40.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:40.684 "listen_address": { 00:22:40.684 "trtype": "RDMA", 00:22:40.684 "adrfam": "IPv4", 00:22:40.684 "traddr": "192.168.100.8", 00:22:40.684 "trsvcid": "4420" 00:22:40.684 }, 00:22:40.684 "peer_address": { 00:22:40.684 "trtype": "RDMA", 00:22:40.684 "adrfam": "IPv4", 00:22:40.684 "traddr": "192.168.100.8", 00:22:40.684 "trsvcid": "35007" 
00:22:40.684 }, 00:22:40.684 "auth": { 00:22:40.684 "state": "completed", 00:22:40.684 "digest": "sha384", 00:22:40.684 "dhgroup": "ffdhe2048" 00:22:40.684 } 00:22:40.684 } 00:22:40.684 ]' 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.684 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.942 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:40.942 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.942 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.942 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.942 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.943 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:40.943 05:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: 
--dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:41.876 05:39:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.876 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.133 00:22:42.133 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.133 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.133 05:39:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.390 { 00:22:42.390 "cntlid": 59, 00:22:42.390 "qid": 0, 00:22:42.390 "state": "enabled", 00:22:42.390 "thread": "nvmf_tgt_poll_group_000", 00:22:42.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:42.390 "listen_address": { 00:22:42.390 "trtype": "RDMA", 00:22:42.390 "adrfam": "IPv4", 00:22:42.390 "traddr": "192.168.100.8", 00:22:42.390 "trsvcid": "4420" 00:22:42.390 }, 00:22:42.390 "peer_address": { 00:22:42.390 "trtype": "RDMA", 00:22:42.390 "adrfam": "IPv4", 00:22:42.390 "traddr": "192.168.100.8", 00:22:42.390 "trsvcid": "50651" 00:22:42.390 }, 00:22:42.390 "auth": { 00:22:42.390 "state": "completed", 00:22:42.390 "digest": "sha384", 00:22:42.390 "dhgroup": "ffdhe2048" 00:22:42.390 } 00:22:42.390 } 00:22:42.390 ]' 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.390 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:42.648 05:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.648 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.648 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.648 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.906 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:42.906 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:43.472 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.472 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:43.472 05:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.472 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.472 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.472 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.472 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:43.472 05:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.731 
05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.731 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.989 00:22:43.989 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.989 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.989 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.248 05:39:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.248 { 00:22:44.248 "cntlid": 61, 00:22:44.248 "qid": 0, 00:22:44.248 "state": "enabled", 00:22:44.248 "thread": "nvmf_tgt_poll_group_000", 00:22:44.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:44.248 "listen_address": { 00:22:44.248 "trtype": "RDMA", 00:22:44.248 "adrfam": "IPv4", 00:22:44.248 "traddr": "192.168.100.8", 00:22:44.248 "trsvcid": "4420" 00:22:44.248 }, 00:22:44.248 "peer_address": { 00:22:44.248 "trtype": "RDMA", 00:22:44.248 "adrfam": "IPv4", 00:22:44.248 "traddr": "192.168.100.8", 00:22:44.248 "trsvcid": "57447" 00:22:44.248 }, 00:22:44.248 "auth": { 00:22:44.248 "state": "completed", 00:22:44.248 "digest": "sha384", 00:22:44.248 "dhgroup": "ffdhe2048" 00:22:44.248 } 00:22:44.248 } 00:22:44.248 ]' 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.248 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.248 05:39:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.507 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:44.507 05:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:45.072 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.330 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:45.330 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.330 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.330 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.330 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.330 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:22:45.330 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:45.589 05:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.589 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.847 { 00:22:45.847 "cntlid": 63, 00:22:45.847 "qid": 0, 00:22:45.847 "state": "enabled", 00:22:45.847 "thread": "nvmf_tgt_poll_group_000", 00:22:45.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:45.847 "listen_address": { 00:22:45.847 "trtype": "RDMA", 00:22:45.847 "adrfam": "IPv4", 00:22:45.847 "traddr": "192.168.100.8", 00:22:45.847 "trsvcid": "4420" 00:22:45.847 
}, 00:22:45.847 "peer_address": { 00:22:45.847 "trtype": "RDMA", 00:22:45.847 "adrfam": "IPv4", 00:22:45.847 "traddr": "192.168.100.8", 00:22:45.847 "trsvcid": "37758" 00:22:45.847 }, 00:22:45.847 "auth": { 00:22:45.847 "state": "completed", 00:22:45.847 "digest": "sha384", 00:22:45.847 "dhgroup": "ffdhe2048" 00:22:45.847 } 00:22:45.847 } 00:22:45.847 ]' 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.847 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.105 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:46.105 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.105 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.105 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.105 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.363 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:46.363 05:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.930 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.189 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.447 00:22:47.447 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.447 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.447 05:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.706 { 00:22:47.706 "cntlid": 65, 00:22:47.706 "qid": 0, 00:22:47.706 "state": "enabled", 00:22:47.706 "thread": "nvmf_tgt_poll_group_000", 00:22:47.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:47.706 "listen_address": { 00:22:47.706 "trtype": "RDMA", 00:22:47.706 "adrfam": "IPv4", 00:22:47.706 "traddr": "192.168.100.8", 00:22:47.706 "trsvcid": "4420" 00:22:47.706 }, 00:22:47.706 "peer_address": { 00:22:47.706 "trtype": "RDMA", 00:22:47.706 "adrfam": "IPv4", 00:22:47.706 "traddr": "192.168.100.8", 00:22:47.706 "trsvcid": "36890" 00:22:47.706 }, 00:22:47.706 "auth": { 00:22:47.706 "state": "completed", 00:22:47.706 "digest": "sha384", 00:22:47.706 "dhgroup": "ffdhe3072" 00:22:47.706 } 00:22:47.706 } 00:22:47.706 ]' 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:47.706 
05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.706 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.964 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:47.964 05:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:48.529 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
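The `--dhchap-secret` strings passed to `nvme connect` above follow the DH-HMAC-CHAP secret representation (`DHHC-1:<tag>:<base64 payload>:`). A minimal sketch of inspecting one of the test secrets from this log — assuming, as the NVMe authentication spec describes, that the tag `02` denotes a 48-byte (SHA-384-sized) key and that the base64 payload is the key followed by a 4-byte CRC-32; both points are stated here as assumptions, not taken from the log itself:

```shell
# One of the DHHC-1 secrets appearing verbatim in the records above.
secret='DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==:'

# Strip the "DHHC-1:02:" prefix and the trailing ":" to isolate the base64 payload.
b64=${secret#DHHC-1:02:}
b64=${b64%:}

# Decode and count the payload bytes. For a tag-02 secret this should be
# 52 bytes: a 48-byte key plus (assumed) 4-byte CRC-32.
nbytes=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "decoded payload: $nbytes bytes"
```

This only decodes and measures the payload; it does not validate the CRC.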
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.788 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.049 00:22:49.049 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.049 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.049 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.378 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.378 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.378 
05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.378 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.378 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.378 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.378 { 00:22:49.378 "cntlid": 67, 00:22:49.378 "qid": 0, 00:22:49.378 "state": "enabled", 00:22:49.378 "thread": "nvmf_tgt_poll_group_000", 00:22:49.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:49.378 "listen_address": { 00:22:49.378 "trtype": "RDMA", 00:22:49.378 "adrfam": "IPv4", 00:22:49.378 "traddr": "192.168.100.8", 00:22:49.378 "trsvcid": "4420" 00:22:49.378 }, 00:22:49.378 "peer_address": { 00:22:49.378 "trtype": "RDMA", 00:22:49.378 "adrfam": "IPv4", 00:22:49.378 "traddr": "192.168.100.8", 00:22:49.378 "trsvcid": "40967" 00:22:49.379 }, 00:22:49.379 "auth": { 00:22:49.379 "state": "completed", 00:22:49.379 "digest": "sha384", 00:22:49.379 "dhgroup": "ffdhe3072" 00:22:49.379 } 00:22:49.379 } 00:22:49.379 ]' 00:22:49.379 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.379 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.379 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.379 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:49.379 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.694 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.695 05:39:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.695 05:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.695 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:49.695 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:50.293 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.551 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:50.551 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.551 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.551 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.551 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.551 
05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:50.551 05:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:50.551 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:50.551 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.551 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:50.551 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:50.551 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:50.551 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.551 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.552 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.552 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.810 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.810 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.810 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 
-- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.810 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.810 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.068 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.068 { 00:22:51.068 "cntlid": 69, 00:22:51.068 "qid": 0, 00:22:51.068 "state": "enabled", 00:22:51.068 "thread": "nvmf_tgt_poll_group_000", 00:22:51.068 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:51.068 "listen_address": { 00:22:51.069 "trtype": "RDMA", 00:22:51.069 "adrfam": "IPv4", 00:22:51.069 "traddr": "192.168.100.8", 00:22:51.069 "trsvcid": "4420" 00:22:51.069 }, 00:22:51.069 "peer_address": { 00:22:51.069 "trtype": "RDMA", 00:22:51.069 "adrfam": "IPv4", 00:22:51.069 "traddr": "192.168.100.8", 00:22:51.069 "trsvcid": "43289" 00:22:51.069 }, 00:22:51.069 "auth": { 00:22:51.069 "state": "completed", 00:22:51.069 "digest": "sha384", 00:22:51.069 "dhgroup": "ffdhe3072" 00:22:51.069 } 00:22:51.069 } 00:22:51.069 ]' 00:22:51.069 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.326 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:51.326 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.326 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:51.326 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.326 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.326 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.326 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.584 05:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:51.584 05:39:47 
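After each `bdev_nvme_attach_controller`, the script calls `nvmf_subsystem_get_qpairs` and asserts the negotiated auth parameters with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`), as seen in the `target/auth.sh@75`-`@77` records above. A standalone sketch of that verification step, using a JSON snippet abridged from this log and plain `sed` in place of `jq` so it runs without dependencies (the `extract` helper is a hypothetical stand-in, not part of `auth.sh`):

```shell
# Abridged qpair JSON, copied from the nvmf_subsystem_get_qpairs output above.
qpairs='[{"cntlid": 63, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe2048"}}]'

# Hypothetical helper: pull the first string value for a given field name.
extract() {
  echo "$qpairs" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p" | head -n1
}

digest=$(extract digest)
dhgroup=$(extract dhgroup)
echo "negotiated auth: digest=$digest dhgroup=$dhgroup"
```

The real script compares these against the loop's current digest/dhgroup pair (the `[[ sha384 == \s\h\a\3\8\4 ]]` checks in the log) before detaching the controller.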
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:52.150 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.408 05:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.666 00:22:52.666 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.666 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.666 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.927 { 00:22:52.927 "cntlid": 71, 00:22:52.927 "qid": 0, 00:22:52.927 "state": "enabled", 00:22:52.927 "thread": "nvmf_tgt_poll_group_000", 00:22:52.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:52.927 "listen_address": { 00:22:52.927 "trtype": "RDMA", 00:22:52.927 "adrfam": "IPv4", 00:22:52.927 "traddr": "192.168.100.8", 00:22:52.927 "trsvcid": "4420" 00:22:52.927 }, 00:22:52.927 "peer_address": { 00:22:52.927 "trtype": "RDMA", 00:22:52.927 "adrfam": "IPv4", 00:22:52.927 "traddr": "192.168.100.8", 00:22:52.927 "trsvcid": "35538" 00:22:52.927 }, 00:22:52.927 "auth": { 00:22:52.927 "state": "completed", 00:22:52.927 "digest": "sha384", 00:22:52.927 "dhgroup": "ffdhe3072" 00:22:52.927 } 00:22:52.927 } 00:22:52.927 ]' 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.927 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.186 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:53.186 05:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:22:53.753 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:54.012 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.272 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.273 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.273 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.273 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.273 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.273 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.532 00:22:54.532 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.532 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.532 05:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.792 { 00:22:54.792 "cntlid": 73, 00:22:54.792 "qid": 0, 00:22:54.792 "state": "enabled", 00:22:54.792 "thread": "nvmf_tgt_poll_group_000", 00:22:54.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:54.792 "listen_address": { 00:22:54.792 "trtype": "RDMA", 00:22:54.792 "adrfam": "IPv4", 00:22:54.792 "traddr": "192.168.100.8", 00:22:54.792 "trsvcid": "4420" 00:22:54.792 }, 00:22:54.792 "peer_address": { 00:22:54.792 "trtype": "RDMA", 00:22:54.792 "adrfam": "IPv4", 00:22:54.792 "traddr": "192.168.100.8", 00:22:54.792 "trsvcid": "44366" 00:22:54.792 }, 00:22:54.792 "auth": { 00:22:54.792 "state": "completed", 00:22:54.792 "digest": "sha384", 00:22:54.792 "dhgroup": "ffdhe4096" 00:22:54.792 } 00:22:54.792 } 00:22:54.792 ]' 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.792 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.050 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:55.051 05:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:22:55.618 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.877 05:39:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.877 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.136 00:22:56.136 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.136 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.136 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:22:56.396 { 00:22:56.396 "cntlid": 75, 00:22:56.396 "qid": 0, 00:22:56.396 "state": "enabled", 00:22:56.396 "thread": "nvmf_tgt_poll_group_000", 00:22:56.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:56.396 "listen_address": { 00:22:56.396 "trtype": "RDMA", 00:22:56.396 "adrfam": "IPv4", 00:22:56.396 "traddr": "192.168.100.8", 00:22:56.396 "trsvcid": "4420" 00:22:56.396 }, 00:22:56.396 "peer_address": { 00:22:56.396 "trtype": "RDMA", 00:22:56.396 "adrfam": "IPv4", 00:22:56.396 "traddr": "192.168.100.8", 00:22:56.396 "trsvcid": "54824" 00:22:56.396 }, 00:22:56.396 "auth": { 00:22:56.396 "state": "completed", 00:22:56.396 "digest": "sha384", 00:22:56.396 "dhgroup": "ffdhe4096" 00:22:56.396 } 00:22:56.396 } 00:22:56.396 ]' 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:56.396 05:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.656 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:56.656 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.656 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.656 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.656 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.915 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # 
nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:56.915 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:22:57.483 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.483 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:57.483 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.483 05:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.483 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.483 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.483 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:57.483 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 
00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.743 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.003 00:22:58.003 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.003 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.003 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.261 { 00:22:58.261 "cntlid": 77, 00:22:58.261 "qid": 0, 00:22:58.261 "state": "enabled", 00:22:58.261 "thread": "nvmf_tgt_poll_group_000", 00:22:58.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:58.261 "listen_address": { 00:22:58.261 "trtype": "RDMA", 00:22:58.261 "adrfam": "IPv4", 00:22:58.261 "traddr": "192.168.100.8", 00:22:58.261 "trsvcid": "4420" 00:22:58.261 }, 00:22:58.261 "peer_address": { 00:22:58.261 "trtype": "RDMA", 00:22:58.261 "adrfam": "IPv4", 00:22:58.261 "traddr": "192.168.100.8", 00:22:58.261 "trsvcid": "42529" 
00:22:58.261 }, 00:22:58.261 "auth": { 00:22:58.261 "state": "completed", 00:22:58.261 "digest": "sha384", 00:22:58.261 "dhgroup": "ffdhe4096" 00:22:58.261 } 00:22:58.261 } 00:22:58.261 ]' 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.261 05:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.521 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:58.521 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret 
DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:22:59.089 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.349 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:59.349 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.349 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.349 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.349 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.349 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:59.349 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:59.609 05:39:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.609 05:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.868 00:22:59.868 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.868 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.868 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.868 05:39:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.868 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.868 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.868 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.127 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.127 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.127 { 00:23:00.127 "cntlid": 79, 00:23:00.127 "qid": 0, 00:23:00.127 "state": "enabled", 00:23:00.127 "thread": "nvmf_tgt_poll_group_000", 00:23:00.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:00.127 "listen_address": { 00:23:00.127 "trtype": "RDMA", 00:23:00.127 "adrfam": "IPv4", 00:23:00.127 "traddr": "192.168.100.8", 00:23:00.127 "trsvcid": "4420" 00:23:00.127 }, 00:23:00.127 "peer_address": { 00:23:00.127 "trtype": "RDMA", 00:23:00.127 "adrfam": "IPv4", 00:23:00.127 "traddr": "192.168.100.8", 00:23:00.127 "trsvcid": "60558" 00:23:00.127 }, 00:23:00.127 "auth": { 00:23:00.127 "state": "completed", 00:23:00.127 "digest": "sha384", 00:23:00.127 "dhgroup": "ffdhe4096" 00:23:00.127 } 00:23:00.127 } 00:23:00.127 ]' 00:23:00.127 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.127 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:00.127 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.127 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:00.127 05:39:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.128 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.128 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.128 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.386 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:00.386 05:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.954 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.213 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.214 05:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.782 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.782 05:39:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.782 { 00:23:01.782 "cntlid": 81, 00:23:01.782 "qid": 0, 00:23:01.782 "state": "enabled", 00:23:01.782 "thread": "nvmf_tgt_poll_group_000", 00:23:01.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:01.782 "listen_address": { 00:23:01.782 "trtype": "RDMA", 00:23:01.782 "adrfam": "IPv4", 00:23:01.782 "traddr": "192.168.100.8", 00:23:01.782 "trsvcid": "4420" 00:23:01.782 }, 00:23:01.782 "peer_address": { 00:23:01.782 "trtype": "RDMA", 00:23:01.782 "adrfam": "IPv4", 00:23:01.782 "traddr": "192.168.100.8", 00:23:01.782 "trsvcid": "42083" 00:23:01.782 }, 00:23:01.782 "auth": { 00:23:01.782 "state": "completed", 00:23:01.782 "digest": "sha384", 00:23:01.782 "dhgroup": "ffdhe6144" 00:23:01.782 } 00:23:01.782 } 00:23:01.782 ]' 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:01.782 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.041 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.041 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.041 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.041 05:39:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:02.041 05:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.978 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.237 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.237 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.237 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.237 05:39:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.496 00:23:03.496 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.496 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.496 05:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.753 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.753 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.753 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.753 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.753 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.754 { 00:23:03.754 "cntlid": 83, 00:23:03.754 "qid": 0, 00:23:03.754 "state": "enabled", 00:23:03.754 "thread": "nvmf_tgt_poll_group_000", 00:23:03.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:03.754 "listen_address": { 00:23:03.754 "trtype": "RDMA", 00:23:03.754 "adrfam": "IPv4", 00:23:03.754 "traddr": "192.168.100.8", 00:23:03.754 "trsvcid": "4420" 00:23:03.754 }, 00:23:03.754 
"peer_address": { 00:23:03.754 "trtype": "RDMA", 00:23:03.754 "adrfam": "IPv4", 00:23:03.754 "traddr": "192.168.100.8", 00:23:03.754 "trsvcid": "55297" 00:23:03.754 }, 00:23:03.754 "auth": { 00:23:03.754 "state": "completed", 00:23:03.754 "digest": "sha384", 00:23:03.754 "dhgroup": "ffdhe6144" 00:23:03.754 } 00:23:03.754 } 00:23:03.754 ]' 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.754 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.012 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:04.012 05:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:04.577 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.836 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:04.836 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.836 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.836 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.836 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.836 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:04.836 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 
00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.094 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.352 00:23:05.352 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.352 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:23:05.352 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.610 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.610 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.610 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.610 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.610 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.610 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.610 { 00:23:05.610 "cntlid": 85, 00:23:05.610 "qid": 0, 00:23:05.610 "state": "enabled", 00:23:05.610 "thread": "nvmf_tgt_poll_group_000", 00:23:05.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:05.610 "listen_address": { 00:23:05.610 "trtype": "RDMA", 00:23:05.610 "adrfam": "IPv4", 00:23:05.610 "traddr": "192.168.100.8", 00:23:05.610 "trsvcid": "4420" 00:23:05.610 }, 00:23:05.610 "peer_address": { 00:23:05.610 "trtype": "RDMA", 00:23:05.610 "adrfam": "IPv4", 00:23:05.610 "traddr": "192.168.100.8", 00:23:05.610 "trsvcid": "42181" 00:23:05.610 }, 00:23:05.610 "auth": { 00:23:05.610 "state": "completed", 00:23:05.610 "digest": "sha384", 00:23:05.610 "dhgroup": "ffdhe6144" 00:23:05.610 } 00:23:05.610 } 00:23:05.610 ]' 00:23:05.610 05:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.610 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:05.610 05:40:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.610 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:05.610 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.610 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.610 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.610 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.868 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:05.868 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:06.433 05:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:06.692 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.259 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.259 { 00:23:07.259 "cntlid": 87, 00:23:07.259 "qid": 0, 00:23:07.259 "state": "enabled", 00:23:07.259 "thread": "nvmf_tgt_poll_group_000", 00:23:07.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:07.259 "listen_address": { 00:23:07.259 "trtype": "RDMA", 00:23:07.259 "adrfam": "IPv4", 00:23:07.259 "traddr": "192.168.100.8", 00:23:07.259 "trsvcid": "4420" 00:23:07.259 }, 00:23:07.259 "peer_address": { 00:23:07.259 "trtype": "RDMA", 00:23:07.259 "adrfam": "IPv4", 00:23:07.259 "traddr": "192.168.100.8", 00:23:07.259 "trsvcid": "36734" 00:23:07.259 }, 00:23:07.259 "auth": { 00:23:07.259 "state": "completed", 00:23:07.259 "digest": "sha384", 00:23:07.259 "dhgroup": "ffdhe6144" 00:23:07.259 } 00:23:07.259 } 00:23:07.259 ]' 00:23:07.259 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.518 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:07.518 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.518 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:07.518 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.518 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.518 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.518 05:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.780 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:07.780 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.348 05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:08.348 
05:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.607 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.608 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.176 00:23:09.176 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.176 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.176 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.435 { 00:23:09.435 "cntlid": 89, 00:23:09.435 "qid": 0, 00:23:09.435 "state": "enabled", 00:23:09.435 "thread": "nvmf_tgt_poll_group_000", 00:23:09.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:09.435 "listen_address": { 00:23:09.435 "trtype": "RDMA", 00:23:09.435 "adrfam": "IPv4", 
00:23:09.435 "traddr": "192.168.100.8", 00:23:09.435 "trsvcid": "4420" 00:23:09.435 }, 00:23:09.435 "peer_address": { 00:23:09.435 "trtype": "RDMA", 00:23:09.435 "adrfam": "IPv4", 00:23:09.435 "traddr": "192.168.100.8", 00:23:09.435 "trsvcid": "39794" 00:23:09.435 }, 00:23:09.435 "auth": { 00:23:09.435 "state": "completed", 00:23:09.435 "digest": "sha384", 00:23:09.435 "dhgroup": "ffdhe8192" 00:23:09.435 } 00:23:09.435 } 00:23:09.435 ]' 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.435 05:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.694 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:09.694 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:10.261 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.261 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:10.261 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.261 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.520 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.520 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.521 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:10.521 05:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.521 05:40:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.521 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.089 00:23:11.089 05:40:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.089 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.089 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.348 { 00:23:11.348 "cntlid": 91, 00:23:11.348 "qid": 0, 00:23:11.348 "state": "enabled", 00:23:11.348 "thread": "nvmf_tgt_poll_group_000", 00:23:11.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:11.348 "listen_address": { 00:23:11.348 "trtype": "RDMA", 00:23:11.348 "adrfam": "IPv4", 00:23:11.348 "traddr": "192.168.100.8", 00:23:11.348 "trsvcid": "4420" 00:23:11.348 }, 00:23:11.348 "peer_address": { 00:23:11.348 "trtype": "RDMA", 00:23:11.348 "adrfam": "IPv4", 00:23:11.348 "traddr": "192.168.100.8", 00:23:11.348 "trsvcid": "36178" 00:23:11.348 }, 00:23:11.348 "auth": { 00:23:11.348 "state": "completed", 00:23:11.348 "digest": "sha384", 00:23:11.348 "dhgroup": "ffdhe8192" 00:23:11.348 } 00:23:11.348 } 00:23:11.348 ]' 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:11.348 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.349 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.349 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.349 05:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.608 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:11.608 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:12.177 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.436 
05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.436 05:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.436 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.436 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.436 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.436 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.004 00:23:13.004 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.004 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.004 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.263 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.263 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.263 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.263 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.263 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.263 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.263 { 00:23:13.263 "cntlid": 93, 00:23:13.263 "qid": 0, 00:23:13.263 "state": "enabled", 00:23:13.263 "thread": "nvmf_tgt_poll_group_000", 00:23:13.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:13.263 "listen_address": { 00:23:13.263 "trtype": "RDMA", 00:23:13.263 "adrfam": "IPv4", 00:23:13.263 "traddr": "192.168.100.8", 00:23:13.263 "trsvcid": "4420" 00:23:13.263 }, 00:23:13.263 "peer_address": { 00:23:13.263 "trtype": "RDMA", 00:23:13.264 "adrfam": "IPv4", 00:23:13.264 "traddr": "192.168.100.8", 00:23:13.264 "trsvcid": "45700" 00:23:13.264 }, 00:23:13.264 "auth": { 00:23:13.264 "state": "completed", 00:23:13.264 "digest": "sha384", 00:23:13.264 "dhgroup": "ffdhe8192" 00:23:13.264 } 00:23:13.264 } 00:23:13.264 ]' 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.264 05:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.522 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:13.522 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:14.090 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.349 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:14.349 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.349 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.349 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.349 05:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:14.349 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:14.349 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:14.609 05:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.609 05:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.178 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.178 { 00:23:15.178 "cntlid": 95, 00:23:15.178 "qid": 0, 00:23:15.178 "state": "enabled", 00:23:15.178 "thread": "nvmf_tgt_poll_group_000", 00:23:15.178 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:15.178 "listen_address": { 00:23:15.178 "trtype": "RDMA", 00:23:15.178 "adrfam": "IPv4", 00:23:15.178 "traddr": "192.168.100.8", 00:23:15.178 "trsvcid": "4420" 00:23:15.178 }, 00:23:15.178 "peer_address": { 00:23:15.178 "trtype": "RDMA", 00:23:15.178 "adrfam": "IPv4", 00:23:15.178 "traddr": "192.168.100.8", 00:23:15.178 "trsvcid": "43694" 00:23:15.178 }, 00:23:15.178 "auth": { 00:23:15.178 "state": "completed", 00:23:15.178 "digest": "sha384", 00:23:15.178 "dhgroup": "ffdhe8192" 00:23:15.178 } 00:23:15.178 } 00:23:15.178 ]' 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:15.178 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.437 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:15.437 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.437 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.437 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.437 05:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.697 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:15.697 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:16.265 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 
-- # connect_authenticate sha512 null 0 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.525 05:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.784 00:23:16.784 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.784 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.784 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.043 { 00:23:17.043 "cntlid": 97, 00:23:17.043 "qid": 0, 00:23:17.043 "state": "enabled", 00:23:17.043 "thread": "nvmf_tgt_poll_group_000", 00:23:17.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:17.043 "listen_address": { 00:23:17.043 "trtype": "RDMA", 00:23:17.043 "adrfam": "IPv4", 00:23:17.043 "traddr": "192.168.100.8", 00:23:17.043 "trsvcid": "4420" 00:23:17.043 }, 00:23:17.043 "peer_address": { 00:23:17.043 "trtype": "RDMA", 00:23:17.043 "adrfam": "IPv4", 00:23:17.043 "traddr": "192.168.100.8", 00:23:17.043 "trsvcid": "37877" 00:23:17.043 }, 00:23:17.043 "auth": { 00:23:17.043 "state": "completed", 
00:23:17.043 "digest": "sha512", 00:23:17.043 "dhgroup": "null" 00:23:17.043 } 00:23:17.043 } 00:23:17.043 ]' 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.043 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.302 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:17.302 05:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret 
DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:17.871 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.871 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:17.872 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.872 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.872 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.872 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:17.872 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:17.872 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 
00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.130 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.131 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.131 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.389 00:23:18.389 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.389 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.389 05:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.648 { 00:23:18.648 "cntlid": 99, 00:23:18.648 "qid": 0, 00:23:18.648 "state": "enabled", 00:23:18.648 "thread": "nvmf_tgt_poll_group_000", 00:23:18.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:18.648 "listen_address": { 00:23:18.648 "trtype": "RDMA", 00:23:18.648 "adrfam": "IPv4", 00:23:18.648 "traddr": "192.168.100.8", 00:23:18.648 "trsvcid": "4420" 00:23:18.648 }, 00:23:18.648 "peer_address": { 00:23:18.648 "trtype": "RDMA", 00:23:18.648 "adrfam": "IPv4", 00:23:18.648 "traddr": "192.168.100.8", 00:23:18.648 "trsvcid": "53369" 00:23:18.648 }, 00:23:18.648 "auth": { 00:23:18.648 "state": "completed", 00:23:18.648 "digest": "sha512", 00:23:18.648 "dhgroup": "null" 00:23:18.648 } 00:23:18.648 } 00:23:18.648 ]' 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.648 05:40:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:18.648 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.907 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.907 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.907 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.907 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:18.907 05:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.846 05:40:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.846 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.105 00:23:20.105 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.105 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.105 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.364 { 00:23:20.364 "cntlid": 101, 00:23:20.364 "qid": 0, 00:23:20.364 "state": "enabled", 00:23:20.364 "thread": "nvmf_tgt_poll_group_000", 00:23:20.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:20.364 "listen_address": { 00:23:20.364 "trtype": "RDMA", 00:23:20.364 "adrfam": "IPv4", 00:23:20.364 "traddr": "192.168.100.8", 00:23:20.364 "trsvcid": "4420" 00:23:20.364 }, 00:23:20.364 "peer_address": { 00:23:20.364 "trtype": "RDMA", 00:23:20.364 "adrfam": "IPv4", 00:23:20.364 "traddr": "192.168.100.8", 00:23:20.364 "trsvcid": "43260" 00:23:20.364 }, 00:23:20.364 "auth": { 00:23:20.364 "state": "completed", 00:23:20.364 "digest": "sha512", 00:23:20.364 "dhgroup": "null" 00:23:20.364 } 00:23:20.364 } 00:23:20.364 ]' 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.364 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:20.623 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.623 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.623 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.623 05:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:23:20.623 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:20.623 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:21.559 05:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.559 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.560 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.560 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:21.560 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.560 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.819 00:23:21.819 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.819 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.819 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.079 { 00:23:22.079 "cntlid": 103, 00:23:22.079 "qid": 0, 00:23:22.079 "state": "enabled", 00:23:22.079 "thread": "nvmf_tgt_poll_group_000", 00:23:22.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:22.079 "listen_address": { 00:23:22.079 "trtype": "RDMA", 00:23:22.079 "adrfam": "IPv4", 00:23:22.079 "traddr": "192.168.100.8", 00:23:22.079 "trsvcid": "4420" 00:23:22.079 }, 00:23:22.079 "peer_address": { 00:23:22.079 "trtype": "RDMA", 00:23:22.079 "adrfam": "IPv4", 00:23:22.079 "traddr": "192.168.100.8", 00:23:22.079 "trsvcid": "39436" 00:23:22.079 }, 00:23:22.079 
"auth": { 00:23:22.079 "state": "completed", 00:23:22.079 "digest": "sha512", 00:23:22.079 "dhgroup": "null" 00:23:22.079 } 00:23:22.079 } 00:23:22.079 ]' 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:22.079 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.339 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.339 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.339 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.339 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:22.339 05:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:23:23.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.276 05:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.535 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:23.796 { 00:23:23.796 "cntlid": 105, 00:23:23.796 "qid": 0, 00:23:23.796 "state": "enabled", 00:23:23.796 "thread": "nvmf_tgt_poll_group_000", 00:23:23.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:23.796 "listen_address": { 00:23:23.796 "trtype": "RDMA", 00:23:23.796 "adrfam": "IPv4", 00:23:23.796 "traddr": "192.168.100.8", 00:23:23.796 "trsvcid": "4420" 00:23:23.796 }, 00:23:23.796 "peer_address": { 00:23:23.796 "trtype": "RDMA", 00:23:23.796 "adrfam": "IPv4", 00:23:23.796 "traddr": "192.168.100.8", 00:23:23.796 "trsvcid": "56751" 00:23:23.796 }, 00:23:23.796 "auth": { 00:23:23.796 "state": "completed", 00:23:23.796 "digest": "sha512", 00:23:23.796 "dhgroup": "ffdhe2048" 00:23:23.796 } 00:23:23.796 } 00:23:23.796 ]' 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.796 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.055 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
00:23:24.055 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.055 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.055 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.055 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.314 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:24.314 05:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:24.880 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.139 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.398 00:23:25.398 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.398 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.398 05:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.656 05:40:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.656 { 00:23:25.656 "cntlid": 107, 00:23:25.656 "qid": 0, 00:23:25.656 "state": "enabled", 00:23:25.656 "thread": "nvmf_tgt_poll_group_000", 00:23:25.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:25.656 "listen_address": { 00:23:25.656 "trtype": "RDMA", 00:23:25.656 "adrfam": "IPv4", 00:23:25.656 "traddr": "192.168.100.8", 00:23:25.656 "trsvcid": "4420" 00:23:25.656 }, 00:23:25.656 "peer_address": { 00:23:25.656 "trtype": "RDMA", 00:23:25.656 "adrfam": "IPv4", 00:23:25.656 "traddr": "192.168.100.8", 00:23:25.656 "trsvcid": "60230" 00:23:25.656 }, 00:23:25.656 "auth": { 00:23:25.656 "state": "completed", 00:23:25.656 "digest": "sha512", 00:23:25.656 "dhgroup": "ffdhe2048" 00:23:25.656 } 00:23:25.656 } 00:23:25.656 ]' 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.656 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.915 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:25.915 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:26.482 05:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:26.741 05:40:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.741 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.000 00:23:27.000 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.000 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.000 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.259 { 00:23:27.259 "cntlid": 109, 00:23:27.259 "qid": 0, 00:23:27.259 "state": "enabled", 00:23:27.259 "thread": "nvmf_tgt_poll_group_000", 00:23:27.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:27.259 "listen_address": { 00:23:27.259 "trtype": "RDMA", 00:23:27.259 "adrfam": "IPv4", 00:23:27.259 "traddr": 
"192.168.100.8", 00:23:27.259 "trsvcid": "4420" 00:23:27.259 }, 00:23:27.259 "peer_address": { 00:23:27.259 "trtype": "RDMA", 00:23:27.259 "adrfam": "IPv4", 00:23:27.259 "traddr": "192.168.100.8", 00:23:27.259 "trsvcid": "45597" 00:23:27.259 }, 00:23:27.259 "auth": { 00:23:27.259 "state": "completed", 00:23:27.259 "digest": "sha512", 00:23:27.259 "dhgroup": "ffdhe2048" 00:23:27.259 } 00:23:27.259 } 00:23:27.259 ]' 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:27.259 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:27.518 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:27.518 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:27.518 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.518 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.518 05:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.777 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:27.778 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.346 05:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha512 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:28.606 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:28.865 00:23:28.865 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.865 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:23:28.865 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.124 { 00:23:29.124 "cntlid": 111, 00:23:29.124 "qid": 0, 00:23:29.124 "state": "enabled", 00:23:29.124 "thread": "nvmf_tgt_poll_group_000", 00:23:29.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:29.124 "listen_address": { 00:23:29.124 "trtype": "RDMA", 00:23:29.124 "adrfam": "IPv4", 00:23:29.124 "traddr": "192.168.100.8", 00:23:29.124 "trsvcid": "4420" 00:23:29.124 }, 00:23:29.124 "peer_address": { 00:23:29.124 "trtype": "RDMA", 00:23:29.124 "adrfam": "IPv4", 00:23:29.124 "traddr": "192.168.100.8", 00:23:29.124 "trsvcid": "36819" 00:23:29.124 }, 00:23:29.124 "auth": { 00:23:29.124 "state": "completed", 00:23:29.124 "digest": "sha512", 00:23:29.124 "dhgroup": "ffdhe2048" 00:23:29.124 } 00:23:29.124 } 00:23:29.124 ]' 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.124 05:40:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.124 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.383 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:29.383 05:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:29.951 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.210 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.469 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.469 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.469 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.469 05:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.469 00:23:30.728 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:30.728 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:30.728 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.729 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.729 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.729 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:30.729 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.729 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.729 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:30.729 { 00:23:30.729 "cntlid": 113, 00:23:30.729 "qid": 0, 00:23:30.729 "state": "enabled", 00:23:30.729 "thread": "nvmf_tgt_poll_group_000", 00:23:30.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:30.729 "listen_address": { 00:23:30.729 "trtype": "RDMA", 00:23:30.729 "adrfam": "IPv4", 00:23:30.729 "traddr": "192.168.100.8", 00:23:30.729 "trsvcid": "4420" 00:23:30.729 }, 00:23:30.729 "peer_address": { 00:23:30.729 "trtype": "RDMA", 00:23:30.729 "adrfam": "IPv4", 00:23:30.729 "traddr": "192.168.100.8", 00:23:30.729 "trsvcid": "39953" 00:23:30.729 }, 00:23:30.729 "auth": { 00:23:30.729 "state": "completed", 00:23:30.729 "digest": "sha512", 00:23:30.729 "dhgroup": "ffdhe3072" 00:23:30.729 } 00:23:30.729 } 00:23:30.729 ]' 00:23:30.729 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:30.988 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.988 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:30.988 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:30.988 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.988 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.988 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:23:30.988 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.247 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:31.247 05:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:31.815 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.815 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:31.815 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.815 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.815 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.815 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:31.815 05:40:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:31.815 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.075 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.076 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.076 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.335 00:23:32.335 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.335 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:32.335 05:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.594 { 00:23:32.594 "cntlid": 115, 00:23:32.594 "qid": 0, 00:23:32.594 "state": "enabled", 00:23:32.594 "thread": "nvmf_tgt_poll_group_000", 00:23:32.594 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:32.594 "listen_address": { 00:23:32.594 "trtype": "RDMA", 00:23:32.594 "adrfam": "IPv4", 00:23:32.594 "traddr": "192.168.100.8", 00:23:32.594 "trsvcid": "4420" 00:23:32.594 }, 00:23:32.594 "peer_address": { 00:23:32.594 "trtype": "RDMA", 00:23:32.594 "adrfam": "IPv4", 00:23:32.594 "traddr": "192.168.100.8", 00:23:32.594 "trsvcid": "47177" 00:23:32.594 }, 00:23:32.594 "auth": { 00:23:32.594 "state": "completed", 00:23:32.594 "digest": "sha512", 00:23:32.594 "dhgroup": "ffdhe3072" 00:23:32.594 } 00:23:32.594 } 00:23:32.594 ]' 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.594 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.852 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:32.852 05:40:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:33.419 05:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.678 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:33.678 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.678 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.678 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.678 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:33.678 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:33.678 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.937 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:23:34.196 00:23:34.196 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.196 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.196 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.455 { 00:23:34.455 "cntlid": 117, 00:23:34.455 "qid": 0, 00:23:34.455 "state": "enabled", 00:23:34.455 "thread": "nvmf_tgt_poll_group_000", 00:23:34.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:34.455 "listen_address": { 00:23:34.455 "trtype": "RDMA", 00:23:34.455 "adrfam": "IPv4", 00:23:34.455 "traddr": "192.168.100.8", 00:23:34.455 "trsvcid": "4420" 00:23:34.455 }, 00:23:34.455 "peer_address": { 00:23:34.455 "trtype": "RDMA", 00:23:34.455 "adrfam": "IPv4", 00:23:34.455 "traddr": "192.168.100.8", 00:23:34.455 "trsvcid": "39143" 00:23:34.455 }, 00:23:34.455 "auth": { 00:23:34.455 "state": "completed", 00:23:34.455 "digest": "sha512", 00:23:34.455 "dhgroup": "ffdhe3072" 00:23:34.455 } 00:23:34.455 } 00:23:34.455 ]' 00:23:34.455 
05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.455 05:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.713 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:34.713 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.279 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:35.279 05:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:35.538 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:35.796 00:23:35.796 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:35.796 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:35.796 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.054 { 00:23:36.054 "cntlid": 119, 00:23:36.054 "qid": 0, 00:23:36.054 "state": "enabled", 00:23:36.054 "thread": "nvmf_tgt_poll_group_000", 00:23:36.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:36.054 "listen_address": { 00:23:36.054 "trtype": "RDMA", 00:23:36.054 "adrfam": "IPv4", 00:23:36.054 "traddr": "192.168.100.8", 00:23:36.054 "trsvcid": "4420" 00:23:36.054 }, 00:23:36.054 "peer_address": { 00:23:36.054 "trtype": "RDMA", 00:23:36.054 "adrfam": "IPv4", 00:23:36.054 "traddr": "192.168.100.8", 00:23:36.054 "trsvcid": "45217" 00:23:36.054 }, 00:23:36.054 "auth": { 00:23:36.054 "state": "completed", 00:23:36.054 "digest": "sha512", 00:23:36.054 "dhgroup": "ffdhe3072" 00:23:36.054 } 00:23:36.054 } 00:23:36.054 ]' 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.054 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.313 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:36.313 05:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:36.893 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.240 05:40:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.240 05:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.580 00:23:37.580 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:37.580 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:37.580 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:37.839 { 00:23:37.839 "cntlid": 121, 00:23:37.839 "qid": 0, 00:23:37.839 "state": "enabled", 00:23:37.839 "thread": 
"nvmf_tgt_poll_group_000", 00:23:37.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:37.839 "listen_address": { 00:23:37.839 "trtype": "RDMA", 00:23:37.839 "adrfam": "IPv4", 00:23:37.839 "traddr": "192.168.100.8", 00:23:37.839 "trsvcid": "4420" 00:23:37.839 }, 00:23:37.839 "peer_address": { 00:23:37.839 "trtype": "RDMA", 00:23:37.839 "adrfam": "IPv4", 00:23:37.839 "traddr": "192.168.100.8", 00:23:37.839 "trsvcid": "40328" 00:23:37.839 }, 00:23:37.839 "auth": { 00:23:37.839 "state": "completed", 00:23:37.839 "digest": "sha512", 00:23:37.839 "dhgroup": "ffdhe4096" 00:23:37.839 } 00:23:37.839 } 00:23:37.839 ]' 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.839 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.098 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret 
DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:38.098 05:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:38.666 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.926 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:38.926 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.926 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.926 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.926 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:38.926 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:38.926 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:39.185 05:40:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.185 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.444 00:23:39.444 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:39.444 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:39.444 05:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.444 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.444 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.444 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.444 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.703 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.703 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:39.703 { 00:23:39.703 "cntlid": 123, 00:23:39.703 "qid": 0, 00:23:39.703 "state": "enabled", 00:23:39.703 "thread": "nvmf_tgt_poll_group_000", 00:23:39.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:39.703 "listen_address": { 00:23:39.703 "trtype": "RDMA", 00:23:39.703 "adrfam": "IPv4", 00:23:39.703 "traddr": "192.168.100.8", 00:23:39.703 "trsvcid": "4420" 00:23:39.703 }, 00:23:39.703 "peer_address": { 00:23:39.703 "trtype": "RDMA", 00:23:39.703 "adrfam": "IPv4", 00:23:39.703 "traddr": "192.168.100.8", 00:23:39.703 "trsvcid": "33895" 
00:23:39.703 }, 00:23:39.703 "auth": { 00:23:39.703 "state": "completed", 00:23:39.703 "digest": "sha512", 00:23:39.704 "dhgroup": "ffdhe4096" 00:23:39.704 } 00:23:39.704 } 00:23:39.704 ]' 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.704 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.962 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:39.962 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret 
DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:40.530 05:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.530 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:40.530 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.530 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.530 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.530 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:40.530 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.530 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.790 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.048 00:23:41.048 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.048 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.048 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.306 { 00:23:41.306 "cntlid": 125, 00:23:41.306 "qid": 0, 00:23:41.306 "state": "enabled", 00:23:41.306 "thread": "nvmf_tgt_poll_group_000", 00:23:41.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:41.306 "listen_address": { 00:23:41.306 "trtype": "RDMA", 00:23:41.306 "adrfam": "IPv4", 00:23:41.306 "traddr": "192.168.100.8", 00:23:41.306 "trsvcid": "4420" 00:23:41.306 }, 00:23:41.306 "peer_address": { 00:23:41.306 "trtype": "RDMA", 00:23:41.306 "adrfam": "IPv4", 00:23:41.306 "traddr": "192.168.100.8", 00:23:41.306 "trsvcid": "57137" 00:23:41.306 }, 00:23:41.306 "auth": { 00:23:41.306 "state": "completed", 00:23:41.306 "digest": "sha512", 00:23:41.306 "dhgroup": "ffdhe4096" 00:23:41.306 } 00:23:41.306 } 00:23:41.306 ]' 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:41.306 05:40:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.306 05:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.566 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:41.566 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:42.134 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.393 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:42.393 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:42.393 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.393 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.393 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:42.393 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.394 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.653 
05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:42.653 05:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:42.912 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.912 
05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:42.912 { 00:23:42.912 "cntlid": 127, 00:23:42.912 "qid": 0, 00:23:42.912 "state": "enabled", 00:23:42.912 "thread": "nvmf_tgt_poll_group_000", 00:23:42.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:42.912 "listen_address": { 00:23:42.912 "trtype": "RDMA", 00:23:42.912 "adrfam": "IPv4", 00:23:42.912 "traddr": "192.168.100.8", 00:23:42.912 "trsvcid": "4420" 00:23:42.912 }, 00:23:42.912 "peer_address": { 00:23:42.912 "trtype": "RDMA", 00:23:42.912 "adrfam": "IPv4", 00:23:42.912 "traddr": "192.168.100.8", 00:23:42.912 "trsvcid": "37308" 00:23:42.912 }, 00:23:42.912 "auth": { 00:23:42.912 "state": "completed", 00:23:42.912 "digest": "sha512", 00:23:42.912 "dhgroup": "ffdhe4096" 00:23:42.912 } 00:23:42.912 } 00:23:42.912 ]' 00:23:42.912 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.172 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:43.172 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.172 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:43.172 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:43.172 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.172 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.172 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.431 05:40:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:43.431 05:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.000 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.259 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:44.259 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:44.259 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.260 05:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.519 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:44.778 { 00:23:44.778 "cntlid": 129, 00:23:44.778 "qid": 0, 00:23:44.778 "state": "enabled", 00:23:44.778 "thread": "nvmf_tgt_poll_group_000", 00:23:44.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:44.778 "listen_address": { 00:23:44.778 "trtype": "RDMA", 00:23:44.778 "adrfam": "IPv4", 00:23:44.778 "traddr": "192.168.100.8", 00:23:44.778 "trsvcid": "4420" 00:23:44.778 }, 00:23:44.778 "peer_address": { 00:23:44.778 "trtype": "RDMA", 00:23:44.778 "adrfam": 
"IPv4", 00:23:44.778 "traddr": "192.168.100.8", 00:23:44.778 "trsvcid": "58524" 00:23:44.778 }, 00:23:44.778 "auth": { 00:23:44.778 "state": "completed", 00:23:44.778 "digest": "sha512", 00:23:44.778 "dhgroup": "ffdhe6144" 00:23:44.778 } 00:23:44.778 } 00:23:44.778 ]' 00:23:44.778 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:44.779 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:44.779 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:45.038 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:45.038 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.038 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.038 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.038 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.296 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:45.296 05:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:45.866 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=ffdhe6144 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.126 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.385 00:23:46.385 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:46.385 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:23:46.385 05:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:46.644 { 00:23:46.644 "cntlid": 131, 00:23:46.644 "qid": 0, 00:23:46.644 "state": "enabled", 00:23:46.644 "thread": "nvmf_tgt_poll_group_000", 00:23:46.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:46.644 "listen_address": { 00:23:46.644 "trtype": "RDMA", 00:23:46.644 "adrfam": "IPv4", 00:23:46.644 "traddr": "192.168.100.8", 00:23:46.644 "trsvcid": "4420" 00:23:46.644 }, 00:23:46.644 "peer_address": { 00:23:46.644 "trtype": "RDMA", 00:23:46.644 "adrfam": "IPv4", 00:23:46.644 "traddr": "192.168.100.8", 00:23:46.644 "trsvcid": "34202" 00:23:46.644 }, 00:23:46.644 "auth": { 00:23:46.644 "state": "completed", 00:23:46.644 "digest": "sha512", 00:23:46.644 "dhgroup": "ffdhe6144" 00:23:46.644 } 00:23:46.644 } 00:23:46.644 ]' 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:46.644 05:40:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:46.644 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:46.903 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.903 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.903 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.903 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:46.903 05:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:47.469 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.728 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:47.728 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.728 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.728 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.728 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:47.728 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:47.728 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.987 05:40:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.987 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.245 00:23:48.245 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:48.245 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:48.245 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:48.504 { 00:23:48.504 "cntlid": 133, 00:23:48.504 "qid": 0, 00:23:48.504 "state": "enabled", 00:23:48.504 "thread": "nvmf_tgt_poll_group_000", 00:23:48.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:48.504 "listen_address": { 00:23:48.504 "trtype": "RDMA", 00:23:48.504 "adrfam": "IPv4", 00:23:48.504 "traddr": "192.168.100.8", 00:23:48.504 "trsvcid": "4420" 00:23:48.504 }, 00:23:48.504 "peer_address": { 00:23:48.504 "trtype": "RDMA", 00:23:48.504 "adrfam": "IPv4", 00:23:48.504 "traddr": "192.168.100.8", 00:23:48.504 "trsvcid": "35630" 00:23:48.504 }, 00:23:48.504 "auth": { 00:23:48.504 "state": "completed", 00:23:48.504 "digest": "sha512", 00:23:48.504 "dhgroup": "ffdhe6144" 00:23:48.504 } 00:23:48.504 } 00:23:48.504 ]' 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:48.504 05:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:48.504 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:48.504 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:48.504 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.504 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:23:48.504 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.763 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:48.763 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:49.330 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.589 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:49.589 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.589 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.589 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.589 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:49.589 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:49.589 05:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.589 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.848 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.848 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:49.848 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.849 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:50.108 00:23:50.108 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:50.108 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:50.108 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:50.367 { 00:23:50.367 "cntlid": 135, 00:23:50.367 "qid": 0, 00:23:50.367 "state": "enabled", 00:23:50.367 "thread": "nvmf_tgt_poll_group_000", 00:23:50.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:50.367 "listen_address": { 00:23:50.367 "trtype": "RDMA", 00:23:50.367 
"adrfam": "IPv4", 00:23:50.367 "traddr": "192.168.100.8", 00:23:50.367 "trsvcid": "4420" 00:23:50.367 }, 00:23:50.367 "peer_address": { 00:23:50.367 "trtype": "RDMA", 00:23:50.367 "adrfam": "IPv4", 00:23:50.367 "traddr": "192.168.100.8", 00:23:50.367 "trsvcid": "49929" 00:23:50.367 }, 00:23:50.367 "auth": { 00:23:50.367 "state": "completed", 00:23:50.367 "digest": "sha512", 00:23:50.367 "dhgroup": "ffdhe6144" 00:23:50.367 } 00:23:50.367 } 00:23:50.367 ]' 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.367 05:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.626 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:50.626 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:51.194 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.454 05:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:51.454 05:40:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.454 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.021 00:23:52.021 05:40:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:52.021 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.021 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.281 { 00:23:52.281 "cntlid": 137, 00:23:52.281 "qid": 0, 00:23:52.281 "state": "enabled", 00:23:52.281 "thread": "nvmf_tgt_poll_group_000", 00:23:52.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:52.281 "listen_address": { 00:23:52.281 "trtype": "RDMA", 00:23:52.281 "adrfam": "IPv4", 00:23:52.281 "traddr": "192.168.100.8", 00:23:52.281 "trsvcid": "4420" 00:23:52.281 }, 00:23:52.281 "peer_address": { 00:23:52.281 "trtype": "RDMA", 00:23:52.281 "adrfam": "IPv4", 00:23:52.281 "traddr": "192.168.100.8", 00:23:52.281 "trsvcid": "35702" 00:23:52.281 }, 00:23:52.281 "auth": { 00:23:52.281 "state": "completed", 00:23:52.281 "digest": "sha512", 00:23:52.281 "dhgroup": "ffdhe8192" 00:23:52.281 } 00:23:52.281 } 00:23:52.281 ]' 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.281 05:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.539 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:52.539 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:23:53.117 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:23:53.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.376 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.634 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.634 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.634 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.634 05:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.893 00:23:53.893 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:53.893 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:53.893 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.153 { 00:23:54.153 "cntlid": 139, 00:23:54.153 "qid": 0, 00:23:54.153 "state": "enabled", 00:23:54.153 "thread": "nvmf_tgt_poll_group_000", 00:23:54.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:54.153 "listen_address": { 00:23:54.153 "trtype": "RDMA", 00:23:54.153 "adrfam": "IPv4", 00:23:54.153 "traddr": "192.168.100.8", 00:23:54.153 "trsvcid": "4420" 00:23:54.153 }, 00:23:54.153 "peer_address": { 00:23:54.153 "trtype": "RDMA", 00:23:54.153 "adrfam": "IPv4", 00:23:54.153 "traddr": "192.168.100.8", 00:23:54.153 "trsvcid": "59958" 00:23:54.153 }, 00:23:54.153 "auth": { 00:23:54.153 "state": "completed", 00:23:54.153 "digest": "sha512", 00:23:54.153 "dhgroup": "ffdhe8192" 00:23:54.153 } 00:23:54.153 } 00:23:54.153 ]' 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.153 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:54.412 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:54.412 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:23:54.412 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.412 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.412 05:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.671 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:54.671 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: --dhchap-ctrl-secret DHHC-1:02:NWY3ZjZjYWQ2ZGVlYTgyMjY4MTRhMThjMmY3OTA0M2M3NTAwZTBkMjQ0ZWFlN2I4l8mO7w==: 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:55.239 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.498 05:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.065 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:56.065 { 00:23:56.065 
"cntlid": 141, 00:23:56.065 "qid": 0, 00:23:56.065 "state": "enabled", 00:23:56.065 "thread": "nvmf_tgt_poll_group_000", 00:23:56.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:56.065 "listen_address": { 00:23:56.065 "trtype": "RDMA", 00:23:56.065 "adrfam": "IPv4", 00:23:56.065 "traddr": "192.168.100.8", 00:23:56.065 "trsvcid": "4420" 00:23:56.065 }, 00:23:56.065 "peer_address": { 00:23:56.065 "trtype": "RDMA", 00:23:56.065 "adrfam": "IPv4", 00:23:56.065 "traddr": "192.168.100.8", 00:23:56.065 "trsvcid": "57128" 00:23:56.065 }, 00:23:56.065 "auth": { 00:23:56.065 "state": "completed", 00:23:56.065 "digest": "sha512", 00:23:56.065 "dhgroup": "ffdhe8192" 00:23:56.065 } 00:23:56.065 } 00:23:56.065 ]' 00:23:56.065 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:56.324 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.324 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:56.324 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:56.324 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:56.324 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.324 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.324 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.583 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:56.583 05:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:01:NzdkMGNmMmIzNGNjMTE2ODY0ZGFmNjI0ZTJhMGViMDTv/Njv: 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.150 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.409 05:40:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.409 05:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.977 00:23:57.977 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:57.977 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:57.977 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.977 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.977 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.977 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.977 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.235 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:58.236 { 00:23:58.236 "cntlid": 143, 00:23:58.236 "qid": 0, 00:23:58.236 "state": "enabled", 00:23:58.236 "thread": "nvmf_tgt_poll_group_000", 00:23:58.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:58.236 "listen_address": { 00:23:58.236 "trtype": "RDMA", 00:23:58.236 "adrfam": "IPv4", 00:23:58.236 "traddr": "192.168.100.8", 00:23:58.236 "trsvcid": "4420" 00:23:58.236 }, 00:23:58.236 "peer_address": { 00:23:58.236 "trtype": "RDMA", 00:23:58.236 "adrfam": "IPv4", 00:23:58.236 "traddr": "192.168.100.8", 00:23:58.236 "trsvcid": "42390" 00:23:58.236 }, 00:23:58.236 "auth": { 00:23:58.236 "state": "completed", 00:23:58.236 "digest": 
"sha512", 00:23:58.236 "dhgroup": "ffdhe8192" 00:23:58.236 } 00:23:58.236 } 00:23:58.236 ]' 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.236 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.495 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:58.495 05:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:23:59.063 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.063 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:23:59.063 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:59.063 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.063 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.063 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.063 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:59.063 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha512 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.322 05:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.889 00:23:59.889 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:23:59.889 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:59.889 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.149 { 00:24:00.149 "cntlid": 145, 00:24:00.149 "qid": 0, 00:24:00.149 "state": "enabled", 00:24:00.149 "thread": "nvmf_tgt_poll_group_000", 00:24:00.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:00.149 "listen_address": { 00:24:00.149 "trtype": "RDMA", 00:24:00.149 "adrfam": "IPv4", 00:24:00.149 "traddr": "192.168.100.8", 00:24:00.149 "trsvcid": "4420" 00:24:00.149 }, 00:24:00.149 "peer_address": { 00:24:00.149 "trtype": "RDMA", 00:24:00.149 "adrfam": "IPv4", 00:24:00.149 "traddr": "192.168.100.8", 00:24:00.149 "trsvcid": "42460" 00:24:00.149 }, 00:24:00.149 "auth": { 00:24:00.149 "state": "completed", 00:24:00.149 "digest": "sha512", 00:24:00.149 "dhgroup": "ffdhe8192" 00:24:00.149 } 00:24:00.149 } 00:24:00.149 ]' 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:00.149 05:40:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.149 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.408 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:24:00.408 05:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NWQ0NjAyOTUwZWI0YTYxYjFiODYxZWRiMzRkZWE5NGYyYTFkYjY1YzVhNTc2OWI3v+Zzeg==: --dhchap-ctrl-secret DHHC-1:03:ODgwNmJkMTc2ZmZjN2JkZjQ2MWU3YWIzNmJmMDI4M2UyMjU0ZGEwOTI2NWI2ODc4NzdjMWQyMzM5ZWMyY2U4ZH5c7Ys=: 00:24:00.975 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:01.234 05:40:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:01.234 05:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:01.802 request: 00:24:01.802 { 00:24:01.802 "name": "nvme0", 00:24:01.802 "trtype": "rdma", 00:24:01.802 "traddr": "192.168.100.8", 00:24:01.802 "adrfam": "ipv4", 00:24:01.802 "trsvcid": "4420", 00:24:01.802 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:01.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:01.802 "prchk_reftag": false, 00:24:01.802 "prchk_guard": false, 00:24:01.802 "hdgst": false, 00:24:01.802 "ddgst": false, 00:24:01.802 "dhchap_key": "key2", 00:24:01.802 "allow_unrecognized_csi": false, 00:24:01.802 "method": "bdev_nvme_attach_controller", 00:24:01.802 "req_id": 1 00:24:01.802 } 00:24:01.802 Got JSON-RPC error response 00:24:01.802 response: 00:24:01.802 { 00:24:01.802 "code": -5, 00:24:01.802 "message": "Input/output error" 00:24:01.802 } 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.802 05:40:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 
00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.802 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:02.061 request: 00:24:02.061 { 00:24:02.061 "name": "nvme0", 00:24:02.061 "trtype": "rdma", 00:24:02.061 "traddr": "192.168.100.8", 00:24:02.061 "adrfam": "ipv4", 00:24:02.061 "trsvcid": "4420", 00:24:02.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:02.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:02.061 "prchk_reftag": false, 00:24:02.061 "prchk_guard": false, 00:24:02.061 "hdgst": false, 00:24:02.061 "ddgst": false, 00:24:02.061 "dhchap_key": "key1", 00:24:02.061 "dhchap_ctrlr_key": "ckey2", 00:24:02.061 "allow_unrecognized_csi": false, 00:24:02.061 "method": "bdev_nvme_attach_controller", 00:24:02.061 "req_id": 1 00:24:02.061 } 00:24:02.061 Got 
JSON-RPC error response 00:24:02.061 response: 00:24:02.061 { 00:24:02.061 "code": -5, 00:24:02.061 "message": "Input/output error" 00:24:02.061 } 00:24:02.061 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:02.061 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.320 05:40:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.320 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.321 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.321 05:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.579 request: 00:24:02.579 { 00:24:02.579 "name": "nvme0", 00:24:02.579 "trtype": "rdma", 00:24:02.579 "traddr": "192.168.100.8", 00:24:02.579 "adrfam": "ipv4", 00:24:02.579 "trsvcid": "4420", 00:24:02.579 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:02.579 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:02.579 "prchk_reftag": false, 00:24:02.579 "prchk_guard": false, 00:24:02.579 "hdgst": false, 00:24:02.579 "ddgst": false, 00:24:02.579 "dhchap_key": "key1", 00:24:02.579 "dhchap_ctrlr_key": "ckey1", 00:24:02.579 "allow_unrecognized_csi": false, 00:24:02.579 "method": "bdev_nvme_attach_controller", 00:24:02.579 "req_id": 1 00:24:02.579 } 00:24:02.579 Got JSON-RPC error response 00:24:02.579 response: 00:24:02.579 { 00:24:02.579 "code": -5, 00:24:02.579 "message": "Input/output error" 00:24:02.579 } 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3381900 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3381900 ']' 00:24:02.579 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3381900 00:24:02.579 
05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:02.837 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.837 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3381900 00:24:02.837 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.837 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.837 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3381900' 00:24:02.837 killing process with pid 3381900 00:24:02.837 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3381900 00:24:02.837 05:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3381900 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3406564 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3406564 00:24:04.214 05:41:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3406564 ']' 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.214 05:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3406564 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3406564 ']' 00:24:05.150 05:41:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.150 05:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.409 null0 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h6t 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.668 05:41:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.4Fy ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Fy 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.d8O 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3VY ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3VY 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gnF 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.I4m ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I4m 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.30F 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:05.668 05:41:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:05.668 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.669 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:06.606 nvme0n1 00:24:06.606 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:06.606 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:06.606 05:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:06.606 { 00:24:06.606 "cntlid": 1, 00:24:06.606 "qid": 0, 00:24:06.606 "state": "enabled", 00:24:06.606 "thread": "nvmf_tgt_poll_group_000", 00:24:06.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:06.606 "listen_address": { 00:24:06.606 "trtype": "RDMA", 00:24:06.606 "adrfam": "IPv4", 00:24:06.606 "traddr": "192.168.100.8", 00:24:06.606 "trsvcid": "4420" 00:24:06.606 }, 00:24:06.606 "peer_address": { 00:24:06.606 "trtype": "RDMA", 00:24:06.606 "adrfam": "IPv4", 00:24:06.606 "traddr": "192.168.100.8", 00:24:06.606 "trsvcid": "49591" 00:24:06.606 }, 00:24:06.606 "auth": { 00:24:06.606 "state": "completed", 00:24:06.606 "digest": 
"sha512", 00:24:06.606 "dhgroup": "ffdhe8192" 00:24:06.606 } 00:24:06.606 } 00:24:06.606 ]' 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.606 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.865 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:24:06.865 05:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:24:07.447 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.704 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:07.705 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:07.962 05:41:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:07.962 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.220 request: 00:24:08.220 { 00:24:08.220 "name": "nvme0", 00:24:08.220 "trtype": "rdma", 00:24:08.220 "traddr": "192.168.100.8", 00:24:08.220 "adrfam": "ipv4", 00:24:08.220 "trsvcid": "4420", 00:24:08.220 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:08.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:08.220 "prchk_reftag": false, 00:24:08.220 "prchk_guard": false, 00:24:08.220 "hdgst": false, 00:24:08.220 "ddgst": false, 00:24:08.220 "dhchap_key": "key3", 00:24:08.220 "allow_unrecognized_csi": false, 00:24:08.220 "method": "bdev_nvme_attach_controller", 00:24:08.220 "req_id": 1 00:24:08.221 } 00:24:08.221 Got JSON-RPC 
error response 00:24:08.221 response: 00:24:08.221 { 00:24:08.221 "code": -5, 00:24:08.221 "message": "Input/output error" 00:24:08.221 } 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:08.221 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.480 05:41:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.480 05:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.739 request: 00:24:08.739 { 00:24:08.739 "name": "nvme0", 00:24:08.739 "trtype": "rdma", 00:24:08.739 "traddr": "192.168.100.8", 00:24:08.739 "adrfam": "ipv4", 00:24:08.739 "trsvcid": "4420", 00:24:08.739 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:08.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:08.739 "prchk_reftag": false, 00:24:08.739 "prchk_guard": false, 00:24:08.739 "hdgst": false, 00:24:08.739 "ddgst": false, 00:24:08.739 "dhchap_key": "key3", 00:24:08.739 "allow_unrecognized_csi": false, 00:24:08.739 "method": "bdev_nvme_attach_controller", 00:24:08.739 "req_id": 1 00:24:08.739 } 00:24:08.739 Got JSON-RPC error response 00:24:08.739 response: 00:24:08.739 { 00:24:08.739 "code": -5, 00:24:08.739 "message": "Input/output error" 00:24:08.739 } 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:08.739 
05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.739 05:41:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.739 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key key1 00:24:08.997 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:09.256 request: 00:24:09.256 { 00:24:09.256 "name": "nvme0", 00:24:09.256 "trtype": "rdma", 00:24:09.256 "traddr": "192.168.100.8", 00:24:09.256 "adrfam": "ipv4", 00:24:09.256 "trsvcid": "4420", 00:24:09.256 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:09.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:09.256 "prchk_reftag": false, 00:24:09.256 "prchk_guard": false, 00:24:09.256 "hdgst": false, 00:24:09.256 "ddgst": false, 00:24:09.256 "dhchap_key": "key0", 00:24:09.256 "dhchap_ctrlr_key": "key1", 00:24:09.256 "allow_unrecognized_csi": false, 00:24:09.256 "method": "bdev_nvme_attach_controller", 00:24:09.256 "req_id": 1 00:24:09.256 } 00:24:09.256 Got JSON-RPC error response 00:24:09.256 response: 00:24:09.256 { 00:24:09.256 "code": -5, 00:24:09.256 "message": "Input/output error" 00:24:09.256 } 00:24:09.256 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:09.256 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.256 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.256 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.256 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:09.256 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:09.256 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:09.514 nvme0n1 00:24:09.514 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:09.514 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:09.514 05:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.772 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.772 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.773 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.773 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:24:09.773 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.773 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.030 05:41:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.030 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:10.031 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:10.031 05:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:10.596 nvme0n1 00:24:10.596 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:10.596 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:10.596 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.856 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.856 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:10.856 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.856 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.856 
05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.856 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:10.856 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.856 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:11.115 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.115 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:24:11.115 05:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: --dhchap-ctrl-secret DHHC-1:03:YTU2OTA5NzA5NWUyNDkzYzg5ZWUyMzk3MjQzNDU1OWQ2ZmNhMDQxYTFhNzIyNzBhY2U2YzI1MzI1NjJhNzAxMf67Oiw=: 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.683 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:11.942 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:12.510 request: 00:24:12.510 { 00:24:12.510 "name": "nvme0", 00:24:12.510 "trtype": "rdma", 00:24:12.510 "traddr": "192.168.100.8", 00:24:12.510 "adrfam": "ipv4", 00:24:12.510 "trsvcid": "4420", 00:24:12.510 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:12.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:12.510 "prchk_reftag": false, 00:24:12.510 "prchk_guard": false, 00:24:12.510 "hdgst": false, 00:24:12.510 "ddgst": false, 00:24:12.510 "dhchap_key": "key1", 00:24:12.510 "allow_unrecognized_csi": false, 00:24:12.510 "method": "bdev_nvme_attach_controller", 00:24:12.510 "req_id": 1 00:24:12.510 } 00:24:12.510 Got JSON-RPC error response 00:24:12.510 response: 00:24:12.510 { 00:24:12.510 "code": -5, 00:24:12.510 "message": "Input/output error" 00:24:12.510 } 00:24:12.510 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:12.510 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:12.510 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:12.510 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:12.510 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:12.510 05:41:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:12.510 05:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:13.097 nvme0n1 00:24:13.098 05:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:13.098 05:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.098 05:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:13.356 05:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.356 05:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.356 05:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.615 05:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:13.615 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.615 05:41:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.615 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.615 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:13.615 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:13.615 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:13.874 nvme0n1 00:24:13.874 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:13.874 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:13.874 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd 
nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: '' 2s 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: ]] 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTNkYjY5NmYwOTk1MTQ0MDVhNjAzMGQyNWE5OWI0YWVUaVmw: 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:14.134 05:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:14.134 05:41:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: 2s 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:16.668 05:41:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: ]] 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWEzZTdhYmYxMDJmZTI0OGJjZGJmYjEwZjBhODAxNjhjYzg5MTkzN2JlNDcxNjQ2T2xv5Q==: 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:16.668 05:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:18.572 05:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:19.140 nvme0n1 00:24:19.140 05:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:19.140 05:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.140 05:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.140 05:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.140 05:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:19.140 05:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:19.708 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:19.708 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:19.708 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.966 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.967 05:41:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:19.967 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # 
valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:20.226 05:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:20.794 request: 00:24:20.794 { 00:24:20.794 "name": "nvme0", 00:24:20.794 "dhchap_key": "key1", 00:24:20.794 "dhchap_ctrlr_key": "key3", 00:24:20.794 "method": "bdev_nvme_set_keys", 00:24:20.794 "req_id": 1 00:24:20.794 } 00:24:20.794 Got JSON-RPC error response 00:24:20.794 response: 00:24:20.794 { 00:24:20.794 "code": -13, 00:24:20.794 "message": "Permission denied" 00:24:20.794 } 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:20.794 
05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:24:20.794 05:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:22.171 05:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:22.744 nvme0n1 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:22.744 05:41:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:22.744 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:23.311 request: 00:24:23.311 { 00:24:23.311 "name": "nvme0", 00:24:23.311 "dhchap_key": "key2", 00:24:23.311 "dhchap_ctrlr_key": "key0", 00:24:23.311 "method": "bdev_nvme_set_keys", 00:24:23.311 "req_id": 1 00:24:23.311 } 00:24:23.311 Got JSON-RPC error response 00:24:23.311 response: 00:24:23.311 { 00:24:23.311 "code": -13, 00:24:23.311 "message": "Permission denied" 00:24:23.311 } 00:24:23.311 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:23.311 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.311 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.311 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.311 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:23.311 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:23.311 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.571 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:23.571 05:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:24.506 05:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:24.506 05:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:24.506 05:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3382140 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3382140 ']' 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3382140 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382140 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.765 05:41:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382140' 00:24:24.765 killing process with pid 3382140 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3382140 00:24:24.765 05:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3382140 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:27.302 rmmod nvme_rdma 00:24:27.302 rmmod nvme_fabrics 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3406564 ']' 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # 
killprocess 3406564 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3406564 ']' 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3406564 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3406564 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3406564' 00:24:27.302 killing process with pid 3406564 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3406564 00:24:27.302 05:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3406564 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.h6t /tmp/spdk.key-sha256.d8O /tmp/spdk.key-sha384.gnF /tmp/spdk.key-sha512.30F /tmp/spdk.key-sha512.4Fy /tmp/spdk.key-sha384.3VY /tmp/spdk.key-sha256.I4m '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:24:28.250 00:24:28.250 real 
2m49.877s 00:24:28.250 user 6m21.073s 00:24:28.250 sys 0m26.129s 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.250 ************************************ 00:24:28.250 END TEST nvmf_auth_target 00:24:28.250 ************************************ 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:28.250 ************************************ 00:24:28.250 START TEST nvmf_fuzz 00:24:28.250 ************************************ 00:24:28.250 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:28.550 * Looking for test storage... 
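The `waitforblk` polls traced in the auth-target run above repeatedly pipe `lsblk -l -o NAME` through `grep -q -w nvme0n1` until the reconnected namespace shows up. A minimal generic sketch of that retry pattern follows; the `poll_for` helper name and interface are illustrative only, not SPDK's actual `waitforblk` from `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# poll_for: retry a command until it succeeds or the retry budget runs out.
# Mirrors the waitforblk pattern traced above (lsblk | grep -q -w <dev>),
# but the name and argument layout here are assumptions for illustration.
poll_for() {
    local retries=$1; shift
    local i=0
    while (( i < retries )); do
        if "$@"; then
            return 0          # condition met, e.g. the block device appeared
        fi
        sleep 0.1
        (( ++i ))
    done
    return 1                  # gave up waiting for the condition
}
```

In the trace, the equivalent call would be roughly `poll_for 50 sh -c 'lsblk -l -o NAME | grep -q -w nvme0n1'`, giving the fabric reconnect time to re-expose the namespace before the test moves on.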
00:24:28.550 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 
00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.550 --rc genhtml_branch_coverage=1 00:24:28.550 --rc genhtml_function_coverage=1 00:24:28.550 --rc genhtml_legend=1 00:24:28.550 --rc 
geninfo_all_blocks=1 00:24:28.550 --rc geninfo_unexecuted_blocks=1 00:24:28.550 00:24:28.550 ' 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.550 --rc genhtml_branch_coverage=1 00:24:28.550 --rc genhtml_function_coverage=1 00:24:28.550 --rc genhtml_legend=1 00:24:28.550 --rc geninfo_all_blocks=1 00:24:28.550 --rc geninfo_unexecuted_blocks=1 00:24:28.550 00:24:28.550 ' 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.550 --rc genhtml_branch_coverage=1 00:24:28.550 --rc genhtml_function_coverage=1 00:24:28.550 --rc genhtml_legend=1 00:24:28.550 --rc geninfo_all_blocks=1 00:24:28.550 --rc geninfo_unexecuted_blocks=1 00:24:28.550 00:24:28.550 ' 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.550 --rc genhtml_branch_coverage=1 00:24:28.550 --rc genhtml_function_coverage=1 00:24:28.550 --rc genhtml_legend=1 00:24:28.550 --rc geninfo_all_blocks=1 00:24:28.550 --rc geninfo_unexecuted_blocks=1 00:24:28.550 00:24:28.550 ' 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.550 05:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.550 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.551 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.551 05:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
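[Editor's note] The log above records a real shell error: `common.sh: line 33: [: : integer expression expected`, because `[ '' -eq 1 ]` applies a numeric test to an empty string. A typical defensive pattern (a sketch of the general fix, not the SPDK change itself; the variable name here is hypothetical) is to default the value before the numeric comparison:

```shell
# Reproduces the failure mode from the log and the usual guard:
# an empty variable in a numeric test triggers
# "[: : integer expression expected", so default it first with :-0.
NO_HUGE_FLAG=""                        # hypothetical name; empty, as in the log
if [ "${NO_HUGE_FLAG:-0}" -eq 1 ]; then
    echo "hugepages disabled"
else
    echo "hugepages enabled"
fi
```

With `:-0`, the empty value is replaced by `0` and the test evaluates cleanly instead of erroring.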
common/autotest_common.sh@10 -- # set +x 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.757 
05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:36.757 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:36.757 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:36.758 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:36.758 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:36.758 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # 
(( 2 == 0 )) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:36.758 
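[Editor's note] The `get_rdma_if_list` trace above is a nested-loop set intersection: each detected net device is emitted only if it also appears in the `rxe_cfg` device list, with `continue 2` jumping to the next outer iteration on a match. A simplified, self-contained sketch of that logic (device names taken from the log, loop structure simplified):

```shell
# Intersection of detected net devices with RDMA-capable devices,
# mirroring the get_rdma_if_list loop seen in the trace.
net_devs="mlx_0_0 mlx_0_1"
rxe_net_devs="mlx_0_0 mlx_0_1"
rdma_ifs=""
for net_dev in $net_devs; do
    for rxe_net_dev in $rxe_net_devs; do
        if [ "$net_dev" = "$rxe_net_dev" ]; then
            rdma_ifs="${rdma_ifs:+$rdma_ifs }$net_dev"
            continue 2   # matched: move on to the next net_dev
        fi
    done
done
echo "$rdma_ifs"
```

The `continue 2` is the same idiom visible in the log (`nvmf/common.sh@109 -- # continue 2`): it skips the remaining rxe candidates once a device has matched.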
05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:36.758 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:37.018 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:37.018 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:37.018 altname enp217s0f0np0 00:24:37.018 altname ens818f0np0 00:24:37.018 inet 192.168.100.8/24 scope global mlx_0_0 00:24:37.018 valid_lft forever preferred_lft forever 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # 
[[ -z 192.168.100.9 ]] 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:37.018 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:37.018 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:37.018 altname enp217s0f1np1 00:24:37.018 altname ens818f1np1 00:24:37.018 inet 192.168.100.9/24 scope global mlx_0_1 00:24:37.018 valid_lft forever preferred_lft forever 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz 
-- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:37.018 05:41:33 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:37.018 192.168.100.9' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:37.018 192.168.100.9' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:37.018 192.168.100.9' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
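[Editor's note] The `RDMA_IP_LIST` values above (192.168.100.8 and 192.168.100.9) come from the pipeline `ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1`: field 4 of the one-line `ip -o` output is the `address/prefix` pair, and `cut` strips the prefix length. A sketch of that extraction, run against a canned sample line so it works without Mellanox hardware:

```shell
# Extract the bare IPv4 address from `ip -o -4 addr show` style output,
# mirroring the awk/cut pipeline in nvmf/common.sh. The sample line is
# modeled on the log; no real interface is queried here.
first_ipv4() {
    awk '{print $4}' | cut -d/ -f1
}

sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
ip_addr=$(printf '%s\n' "$sample" | first_ipv4)
echo "$ip_addr"
```

On a live host the sample would instead be `ip -o -4 addr show mlx_0_0`, as in the trace.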
target/fabrics_fuzz.sh@14 -- # nvmfpid=3414767 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3414767 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3414767 ']' 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.018 05:41:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.957 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:38.216 Malloc0 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:24:38.216 05:41:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:25:10.291 Fuzzing completed. 
Shutting down the fuzz application 00:25:10.291 00:25:10.291 Dumping successful admin opcodes: 00:25:10.291 9, 10, 00:25:10.291 Dumping successful io opcodes: 00:25:10.291 0, 9, 00:25:10.291 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 795955, total successful commands: 4631, random_seed: 4151877888 00:25:10.291 NS: 0x2000008f0ec0 admin qp, Total commands completed: 135824, total successful commands: 30, random_seed: 206306368 00:25:10.291 05:42:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:10.550 Fuzzing completed. Shutting down the fuzz application 00:25:10.550 00:25:10.550 Dumping successful admin opcodes: 00:25:10.550 00:25:10.550 Dumping successful io opcodes: 00:25:10.550 00:25:10.550 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2006096864 00:25:10.550 NS: 0x2000008f0ec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 2006187288 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:10.550 05:42:07 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:10.550 rmmod nvme_rdma 00:25:10.550 rmmod nvme_fabrics 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3414767 ']' 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3414767 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3414767 ']' 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3414767 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3414767 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:10.550 
05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3414767' 00:25:10.550 killing process with pid 3414767 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3414767 00:25:10.550 05:42:07 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3414767 00:25:12.452 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:12.452 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:12.452 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:12.452 00:25:12.452 real 0m43.773s 00:25:12.453 user 0m55.498s 00:25:12.453 sys 0m20.994s 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 ************************************ 00:25:12.453 END TEST nvmf_fuzz 00:25:12.453 ************************************ 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 ************************************ 
00:25:12.453 START TEST nvmf_multiconnection 00:25:12.453 ************************************ 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:25:12.453 * Looking for test storage... 00:25:12.453 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.453 05:42:08 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.453 05:42:08 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.453 --rc genhtml_branch_coverage=1 00:25:12.453 --rc genhtml_function_coverage=1 00:25:12.453 --rc genhtml_legend=1 00:25:12.453 --rc geninfo_all_blocks=1 00:25:12.453 --rc geninfo_unexecuted_blocks=1 00:25:12.453 00:25:12.453 ' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.453 --rc genhtml_branch_coverage=1 00:25:12.453 --rc genhtml_function_coverage=1 00:25:12.453 --rc genhtml_legend=1 00:25:12.453 --rc geninfo_all_blocks=1 00:25:12.453 --rc geninfo_unexecuted_blocks=1 00:25:12.453 00:25:12.453 ' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.453 --rc genhtml_branch_coverage=1 00:25:12.453 --rc genhtml_function_coverage=1 00:25:12.453 --rc genhtml_legend=1 00:25:12.453 --rc geninfo_all_blocks=1 00:25:12.453 --rc geninfo_unexecuted_blocks=1 00:25:12.453 00:25:12.453 ' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.453 --rc genhtml_branch_coverage=1 00:25:12.453 --rc genhtml_function_coverage=1 00:25:12.453 
--rc genhtml_legend=1 00:25:12.453 --rc geninfo_all_blocks=1 00:25:12.453 --rc geninfo_unexecuted_blocks=1 00:25:12.453 00:25:12.453 ' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:12.453 05:42:08 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.453 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.454 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.454 05:42:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.588 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:25:20.588 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.588 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:20.589 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:20.589 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:20.589 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 
== 0 )) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:20.589 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:20.589 05:42:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:20.589 05:42:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:20.589 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:20.590 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:25:20.590 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:20.590 altname enp217s0f0np0 00:25:20.590 altname ens818f0np0 00:25:20.590 inet 192.168.100.8/24 scope global mlx_0_0 00:25:20.590 valid_lft forever preferred_lft forever 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:20.590 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:20.590 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:20.590 altname enp217s0f1np1 00:25:20.590 altname ens818f1np1 00:25:20.590 inet 192.168.100.9/24 scope global mlx_0_1 00:25:20.590 valid_lft forever preferred_lft forever 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 
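`get_ip_address` in the trace above derives each NIC's IPv4 address from field 4 of `ip -o -4 addr show <interface>`, then strips the `/prefix` with `cut`. A self-contained sketch against a captured sample line (the address below is copied from the log, not re-queried from a live interface):

```shell
# Field 4 of `ip -o -4 addr show <if>` is "ADDR/PREFIXLEN"; awk picks
# that field and cut drops the prefix length, as in nvmf/common.sh@117.
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
ip_addr=$(echo "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"    # 192.168.100.8
```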
00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:20.590 05:42:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:20.590 192.168.100.9' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:20.590 192.168.100.9' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:20.590 192.168.100.9' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3425144 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3425144 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3425144 ']' 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.590 05:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:20.590 [2024-11-27 05:42:17.081107] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
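The `get_ip_address` helper exercised earlier in this trace (nvmf/common.sh@116-117) pulls the first IPv4 address of each RDMA interface out of `ip -o -4 addr show` with an `awk`/`cut` pipeline, producing the `192.168.100.8`/`192.168.100.9` pair seen below. A standalone sketch of that extraction, using a canned sample line that mirrors the log's mlx_0_0 output so it runs without the interface present (on a real host you would pipe `ip` itself):

```shell
# Sketch of common.sh's get_ip_address: pull the first IPv4 address out of
# `ip -o -4 addr show <if>` output. The sample line below is a stand-in for
# a live `ip` invocation on the mlx_0_0 interface.
sample='2: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'

# Field 4 of the one-line (-o) format is "addr/prefix"; cut drops the prefix.
addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$addr"
```

The `-o` flag is what makes this robust: each address is guaranteed to occupy exactly one line, so the field positions are stable.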
00:25:20.590 [2024-11-27 05:42:17.081215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.849 [2024-11-27 05:42:17.233649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.849 [2024-11-27 05:42:17.331329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.850 [2024-11-27 05:42:17.331381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.850 [2024-11-27 05:42:17.331394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.850 [2024-11-27 05:42:17.331406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.850 [2024-11-27 05:42:17.331415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.850 [2024-11-27 05:42:17.333837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.850 [2024-11-27 05:42:17.333912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.850 [2024-11-27 05:42:17.333973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.850 [2024-11-27 05:42:17.333981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.418 05:42:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.418 [2024-11-27 05:42:17.975968] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f1003dbd940) succeed. 00:25:21.418 [2024-11-27 05:42:17.985861] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f1003d79940) succeed. 
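From here the test script repeats the same four RPCs for each of 11 subsystems (multiconnection.sh lines 21-25, visible in the xtrace that follows): create a malloc bdev, create a subsystem, attach the bdev as a namespace, and add an RDMA listener. A minimal dry-run sketch of that loop, with the RPC client stubbed by `echo` so the command sequence can be inspected anywhere; in a real run you would point `RPC` at `scripts/rpc.py` in an SPDK checkout, with `nvmf_tgt` already listening on `/var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the subsystem-setup loop driven by multiconnection.sh.
RPC="echo rpc.py"             # stub; use ./scripts/rpc.py against a live target
TARGET_IP=192.168.100.8       # first RDMA IP discovered in the log
NVMF_SUBSYS=11                # matches `seq 1 11` in the log

# The transport is created once and shared by every subsystem.
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in $(seq 1 "$NVMF_SUBSYS"); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"                          # 64 MiB ramdisk, 512 B blocks
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"   # attach namespace
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a "$TARGET_IP" -s 4420
done
```

All 11 subsystems listen on the same address and port; initiators distinguish them by NQN, which is what makes the later multiconnection fio pass possible.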
00:25:21.678 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.678 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:21.678 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.678 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:21.678 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.678 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 Malloc1 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 
05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 [2024-11-27 05:42:18.348695] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 Malloc2 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.937 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.938 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:21.938 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.938 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.938 Malloc3 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.197 Malloc4 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.197 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:22.198 05:42:18 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.198 Malloc5 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.198 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.457 Malloc6 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:25:22.457 05:42:18 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.457 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.458 Malloc7 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.458 05:42:18 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.458 Malloc8 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.458 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.717 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.717 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.717 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:22.717 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.717 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.717 Malloc9 00:25:22.717 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.718 Malloc10 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:22.718 05:42:19 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.718 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.977 Malloc11 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.977 05:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:23.915 05:42:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:23.915 05:42:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:23.915 05:42:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.915 05:42:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:23.915 05:42:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 
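The target-side provisioning repeated above for cnode1 through cnode11 follows a fixed four-step RPC sequence per subsystem (multiconnection.sh steps @22-@25). A standalone sketch of that loop, with `rpc_cmd` stubbed to echo its arguments (in the real test it forwards to SPDK's `scripts/rpc.py`):

```shell
# Stub: in autotest, rpc_cmd wraps scripts/rpc.py against the running target.
rpc_cmd() { echo "rpc_cmd $*"; }

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # 64 MiB malloc bdev with 512-byte blocks, as in the log
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # subsystem with any-host access (-a) and serial SPDK$i
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # attach the malloc bdev as a namespace
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # RDMA listener on the test network
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done
```

The serial number `SPDK$i` set here is what the host side later greps for in `lsblk` output to confirm the connection came up.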
00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.820 05:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:26.757 05:42:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:26.757 05:42:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.757 05:42:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.757 05:42:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:26.757 05:42:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:25:29.291 05:42:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:29.860 05:42:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:29.860 05:42:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:29.860 05:42:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.860 05:42:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:29.860 05:42:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.395 05:42:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # 
nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:32.963 05:42:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:32.963 05:42:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:32.963 05:42:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.963 05:42:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:32.963 05:42:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.870 05:42:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:35.808 05:42:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:35.808 05:42:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:35.808 05:42:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.808 05:42:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:35.808 05:42:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.334 05:42:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:25:38.901 
05:42:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:38.901 05:42:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.901 05:42:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.901 05:42:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.901 05:42:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.434 05:42:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:25:42.002 05:42:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 
00:25:42.002 05:42:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:42.002 05:42:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.002 05:42:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:42.002 05:42:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.910 05:42:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:25:45.288 05:42:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:45.288 05:42:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 
00:25:45.288 05:42:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.288 05:42:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:45.288 05:42:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.196 05:42:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:25:48.134 05:42:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:48.134 05:42:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:48.134 05:42:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:25:48.134 05:42:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:48.134 05:42:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.040 05:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:25:50.977 05:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:50.977 05:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:50.977 05:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.977 05:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:50.977 05:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.514 05:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:25:54.083 05:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:54.083 05:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:54.083 05:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.083 05:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:54.083 05:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1209 -- # sleep 2 00:25:55.989 05:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:55.990 05:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:55.990 05:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:55.990 05:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:55.990 05:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.990 05:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:55.990 05:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:55.990 [global] 00:25:55.990 thread=1 00:25:55.990 invalidate=1 00:25:55.990 rw=read 00:25:55.990 time_based=1 00:25:55.990 runtime=10 00:25:55.990 ioengine=libaio 00:25:55.990 direct=1 00:25:55.990 bs=262144 00:25:55.990 iodepth=64 00:25:55.990 norandommap=1 00:25:55.990 numjobs=1 00:25:55.990 00:25:55.990 [job0] 00:25:55.990 filename=/dev/nvme0n1 00:25:55.990 [job1] 00:25:55.990 filename=/dev/nvme10n1 00:25:55.990 [job2] 00:25:55.990 filename=/dev/nvme1n1 00:25:55.990 [job3] 00:25:55.990 filename=/dev/nvme2n1 00:25:55.990 [job4] 00:25:55.990 filename=/dev/nvme3n1 00:25:56.249 [job5] 00:25:56.249 filename=/dev/nvme4n1 00:25:56.249 [job6] 00:25:56.249 filename=/dev/nvme5n1 00:25:56.249 [job7] 00:25:56.249 filename=/dev/nvme6n1 00:25:56.249 [job8] 00:25:56.249 filename=/dev/nvme7n1 00:25:56.249 [job9] 00:25:56.249 filename=/dev/nvme8n1 00:25:56.249 [job10] 00:25:56.249 filename=/dev/nvme9n1 00:25:56.249 Could not set queue depth (nvme0n1) 
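The host-side loop that just finished pairs each `nvme connect` with a `waitforserial` poll (autotest_common.sh @1202-@1212): retry up to 15 times, counting devices whose serial shows up in `lsblk`. A self-contained sketch of that retry logic, with `lsblk` stubbed so it runs without real NVMe devices:

```shell
# Real host-side connect, one per subsystem, as logged:
#   nvme connect -i 15 \
#     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
#     --hostid=8013ee90-59d8-e711-906e-00163566263e \
#     -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

# Stub standing in for `lsblk -l -o NAME,SERIAL`; FAKE_SERIAL simulates
# the serial the kernel reports once the controller attaches.
lsblk() { printf 'nvme0n1 %s\n' "$FAKE_SERIAL"; }

waitforserial() {
    serial=$1
    i=0
    nvme_device_counter=1
    while [ "$i" -le 15 ]; do   # real helper also caps at 15 iterations
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        [ "$nvme_devices" -eq "$nvme_device_counter" ] && return 0
        i=$((i + 1))
        sleep 1                 # real helper sleeps 2s between probes
    done
    return 1
}

FAKE_SERIAL=SPDK1
waitforserial SPDK1 && echo connected
```

The stubbed `lsblk` is an assumption for illustration; the real helper reads the live block-device table, which is why the log interleaves `lsblk -l -o NAME,SERIAL` and `grep -c SPDKn` lines between each connect.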
00:25:56.249 Could not set queue depth (nvme10n1) 00:25:56.249 Could not set queue depth (nvme1n1) 00:25:56.249 Could not set queue depth (nvme2n1) 00:25:56.249 Could not set queue depth (nvme3n1) 00:25:56.249 Could not set queue depth (nvme4n1) 00:25:56.249 Could not set queue depth (nvme5n1) 00:25:56.249 Could not set queue depth (nvme6n1) 00:25:56.249 Could not set queue depth (nvme7n1) 00:25:56.249 Could not set queue depth (nvme8n1) 00:25:56.249 Could not set queue depth (nvme9n1) 00:25:56.508 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.508 fio-3.35 00:25:56.508 Starting 11 threads 00:26:08.724 00:26:08.724 job0: (groupid=0, jobs=1): err= 0: pid=3431530: Wed Nov 27 05:43:03 2024 
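The `fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10` invocation expands its flags into the `[global]` section printed above (`-t` → `rw`, `-r` → `runtime`, `-i` → `bs`, `-d` → `iodepth`) plus one `[jobN]` stanza per connected namespace. A reduced two-device reconstruction of that job file:

```shell
# Mapping inferred from the logged job file:
#   -t read    -> rw=read
#   -r 10      -> runtime=10 (with time_based=1)
#   -i 262144  -> bs=262144 (256 KiB blocks)
#   -d 64      -> iodepth=64
cat > /tmp/nvmf_multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme1n1
EOF
grep -c '^filename=' /tmp/nvmf_multiconnection.fio   # prints 2
```

The real run has 11 job stanzas (job0 through job10, one per cnode), which is why fio reports "Starting 11 threads"; the "Could not set queue depth" warnings are fio failing to adjust the block-layer queue depth on the NVMe-oF devices, not an I/O error.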
00:26:08.724 read: IOPS=898, BW=225MiB/s (236MB/s)(2262MiB/10067msec) 00:26:08.724 slat (usec): min=12, max=43981, avg=1069.98, stdev=3199.54 00:26:08.724 clat (msec): min=12, max=149, avg=70.07, stdev=15.47 00:26:08.724 lat (msec): min=13, max=149, avg=71.14, stdev=15.95 00:26:08.724 clat percentiles (msec): 00:26:08.724 | 1.00th=[ 49], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:26:08.724 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 72], 00:26:08.724 | 70.00th=[ 74], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:26:08.724 | 99.00th=[ 102], 99.50th=[ 111], 99.90th=[ 144], 99.95th=[ 150], 00:26:08.724 | 99.99th=[ 150] 00:26:08.724 bw ( KiB/s): min=162816, max=294912, per=6.68%, avg=229990.40, stdev=44395.11, samples=20 00:26:08.724 iops : min= 636, max= 1152, avg=898.40, stdev=173.42, samples=20 00:26:08.724 lat (msec) : 20=0.28%, 50=1.15%, 100=97.47%, 250=1.11% 00:26:08.724 cpu : usr=0.53%, sys=4.19%, ctx=1790, majf=0, minf=4097 00:26:08.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:08.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.724 issued rwts: total=9047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.724 job1: (groupid=0, jobs=1): err= 0: pid=3431538: Wed Nov 27 05:43:03 2024 00:26:08.724 read: IOPS=1637, BW=409MiB/s (429MB/s)(4100MiB/10015msec) 00:26:08.724 slat (usec): min=10, max=18921, avg=601.30, stdev=1646.84 00:26:08.724 clat (usec): min=12883, max=98566, avg=38440.26, stdev=23649.76 00:26:08.724 lat (msec): min=13, max=105, avg=39.04, stdev=24.05 00:26:08.724 clat percentiles (usec): 00:26:08.724 | 1.00th=[14615], 5.00th=[15139], 10.00th=[16057], 20.00th=[16581], 00:26:08.724 | 30.00th=[16909], 40.00th=[17171], 50.00th=[18482], 60.00th=[52691], 00:26:08.724 | 70.00th=[55313], 80.00th=[66323], 
90.00th=[70779], 95.00th=[72877], 00:26:08.724 | 99.00th=[80217], 99.50th=[84411], 99.90th=[89654], 99.95th=[91751], 00:26:08.724 | 99.99th=[98042] 00:26:08.724 bw ( KiB/s): min=212992, max=978944, per=12.14%, avg=418276.60, stdev=295266.98, samples=20 00:26:08.724 iops : min= 832, max= 3824, avg=1633.85, stdev=1153.41, samples=20 00:26:08.724 lat (msec) : 20=51.73%, 50=1.18%, 100=47.09% 00:26:08.724 cpu : usr=0.51%, sys=5.66%, ctx=2997, majf=0, minf=4097 00:26:08.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:08.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.724 issued rwts: total=16401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.724 job2: (groupid=0, jobs=1): err= 0: pid=3431557: Wed Nov 27 05:43:03 2024 00:26:08.724 read: IOPS=1535, BW=384MiB/s (403MB/s)(3864MiB/10064msec) 00:26:08.724 slat (usec): min=10, max=56589, avg=634.52, stdev=2807.81 00:26:08.724 clat (usec): min=1058, max=145796, avg=40996.18, stdev=29633.60 00:26:08.724 lat (usec): min=1100, max=147473, avg=41630.70, stdev=30190.52 00:26:08.724 clat percentiles (msec): 00:26:08.724 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:26:08.724 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 36], 00:26:08.724 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 91], 95.00th=[ 93], 00:26:08.724 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 138], 99.95th=[ 144], 00:26:08.724 | 99.99th=[ 146] 00:26:08.724 bw ( KiB/s): min=161280, max=915968, per=11.44%, avg=394060.80, stdev=297399.07, samples=20 00:26:08.724 iops : min= 630, max= 3578, avg=1539.30, stdev=1161.72, samples=20 00:26:08.724 lat (msec) : 2=0.16%, 4=0.27%, 10=0.73%, 20=50.78%, 50=14.51% 00:26:08.724 lat (msec) : 100=32.87%, 250=0.68% 00:26:08.724 cpu : usr=0.45%, sys=4.79%, ctx=3249, majf=0, minf=4097 
00:26:08.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:08.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.724 issued rwts: total=15456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.724 job3: (groupid=0, jobs=1): err= 0: pid=3431566: Wed Nov 27 05:43:03 2024 00:26:08.724 read: IOPS=1808, BW=452MiB/s (474MB/s)(4537MiB/10031msec) 00:26:08.724 slat (usec): min=11, max=20338, avg=539.11, stdev=1487.39 00:26:08.724 clat (usec): min=12819, max=95529, avg=34804.26, stdev=20143.61 00:26:08.724 lat (usec): min=13085, max=98282, avg=35343.37, stdev=20483.02 00:26:08.724 clat percentiles (usec): 00:26:08.724 | 1.00th=[16188], 5.00th=[16712], 10.00th=[17171], 20.00th=[17695], 00:26:08.724 | 30.00th=[18220], 40.00th=[18744], 50.00th=[33817], 60.00th=[34341], 00:26:08.724 | 70.00th=[35390], 80.00th=[64750], 90.00th=[70779], 95.00th=[72877], 00:26:08.724 | 99.00th=[80217], 99.50th=[85459], 99.90th=[88605], 99.95th=[89654], 00:26:08.724 | 99.99th=[91751] 00:26:08.724 bw ( KiB/s): min=214528, max=912896, per=13.44%, avg=462924.80, stdev=262279.19, samples=20 00:26:08.724 iops : min= 838, max= 3566, avg=1808.30, stdev=1024.53, samples=20 00:26:08.725 lat (msec) : 20=43.77%, 50=35.13%, 100=21.10% 00:26:08.725 cpu : usr=0.54%, sys=5.96%, ctx=3803, majf=0, minf=4097 00:26:08.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:08.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.725 issued rwts: total=18146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.725 job4: (groupid=0, jobs=1): err= 0: pid=3431572: Wed Nov 27 05:43:03 2024 00:26:08.725 
read: IOPS=1469, BW=367MiB/s (385MB/s)(3686MiB/10031msec) 00:26:08.725 slat (usec): min=11, max=15670, avg=668.09, stdev=1603.79 00:26:08.725 clat (usec): min=11341, max=82468, avg=42834.04, stdev=9081.40 00:26:08.725 lat (usec): min=11612, max=82508, avg=43502.13, stdev=9300.54 00:26:08.725 clat percentiles (usec): 00:26:08.725 | 1.00th=[30540], 5.00th=[33424], 10.00th=[33817], 20.00th=[34341], 00:26:08.725 | 30.00th=[35390], 40.00th=[35914], 50.00th=[37487], 60.00th=[48497], 00:26:08.725 | 70.00th=[50594], 80.00th=[52691], 90.00th=[54264], 95.00th=[55837], 00:26:08.725 | 99.00th=[60031], 99.50th=[62653], 99.90th=[79168], 99.95th=[81265], 00:26:08.725 | 99.99th=[82314] 00:26:08.725 bw ( KiB/s): min=287744, max=463872, per=10.91%, avg=375815.05, stdev=70576.99, samples=20 00:26:08.725 iops : min= 1124, max= 1812, avg=1468.00, stdev=275.71, samples=20 00:26:08.725 lat (msec) : 20=0.31%, 50=66.80%, 100=32.89% 00:26:08.725 cpu : usr=0.73%, sys=6.43%, ctx=2898, majf=0, minf=4097 00:26:08.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:08.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.725 issued rwts: total=14742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.725 job5: (groupid=0, jobs=1): err= 0: pid=3431593: Wed Nov 27 05:43:03 2024 00:26:08.725 read: IOPS=1290, BW=323MiB/s (338MB/s)(3240MiB/10044msec) 00:26:08.725 slat (usec): min=11, max=27836, avg=746.29, stdev=2035.19 00:26:08.725 clat (usec): min=11807, max=99081, avg=48808.27, stdev=10876.44 00:26:08.725 lat (msec): min=12, max=103, avg=49.55, stdev=11.13 00:26:08.725 clat percentiles (usec): 00:26:08.725 | 1.00th=[33162], 5.00th=[33817], 10.00th=[34866], 20.00th=[36439], 00:26:08.725 | 30.00th=[39584], 40.00th=[49021], 50.00th=[51119], 60.00th=[52691], 00:26:08.725 | 70.00th=[53740], 
80.00th=[54789], 90.00th=[57934], 95.00th=[71828], 00:26:08.725 | 99.00th=[77071], 99.50th=[85459], 99.90th=[94897], 99.95th=[95945], 00:26:08.725 | 99.99th=[99091] 00:26:08.725 bw ( KiB/s): min=223744, max=456704, per=9.58%, avg=330112.00, stdev=68490.43, samples=20 00:26:08.725 iops : min= 874, max= 1784, avg=1289.50, stdev=267.54, samples=20 00:26:08.725 lat (msec) : 20=0.29%, 50=44.34%, 100=55.36% 00:26:08.725 cpu : usr=0.44%, sys=5.90%, ctx=2536, majf=0, minf=4097 00:26:08.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:08.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.725 issued rwts: total=12958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.725 job6: (groupid=0, jobs=1): err= 0: pid=3431603: Wed Nov 27 05:43:03 2024 00:26:08.725 read: IOPS=882, BW=221MiB/s (231MB/s)(2222MiB/10067msec) 00:26:08.725 slat (usec): min=15, max=26709, avg=1120.59, stdev=2874.90 00:26:08.725 clat (msec): min=11, max=154, avg=71.28, stdev=15.31 00:26:08.725 lat (msec): min=11, max=159, avg=72.41, stdev=15.73 00:26:08.725 clat percentiles (msec): 00:26:08.725 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:26:08.725 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 73], 00:26:08.725 | 70.00th=[ 78], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:26:08.725 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 146], 99.95th=[ 146], 00:26:08.725 | 99.99th=[ 155] 00:26:08.725 bw ( KiB/s): min=169984, max=296960, per=6.56%, avg=225945.60, stdev=44587.53, samples=20 00:26:08.725 iops : min= 664, max= 1160, avg=882.60, stdev=174.17, samples=20 00:26:08.725 lat (msec) : 20=0.28%, 50=0.67%, 100=97.67%, 250=1.37% 00:26:08.725 cpu : usr=0.40%, sys=4.43%, ctx=1687, majf=0, minf=3660 00:26:08.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.3% 00:26:08.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.725 issued rwts: total=8889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.725 job7: (groupid=0, jobs=1): err= 0: pid=3431610: Wed Nov 27 05:43:03 2024 00:26:08.725 read: IOPS=881, BW=220MiB/s (231MB/s)(2218MiB/10065msec) 00:26:08.725 slat (usec): min=18, max=23523, avg=1122.01, stdev=2831.88 00:26:08.725 clat (msec): min=13, max=163, avg=71.40, stdev=15.37 00:26:08.725 lat (msec): min=13, max=163, avg=72.52, stdev=15.79 00:26:08.725 clat percentiles (msec): 00:26:08.725 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:26:08.725 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:26:08.725 | 70.00th=[ 78], 80.00th=[ 91], 90.00th=[ 92], 95.00th=[ 94], 00:26:08.725 | 99.00th=[ 104], 99.50th=[ 112], 99.90th=[ 153], 99.95th=[ 153], 00:26:08.725 | 99.99th=[ 163] 00:26:08.725 bw ( KiB/s): min=166912, max=300032, per=6.55%, avg=225559.70, stdev=44584.96, samples=20 00:26:08.725 iops : min= 652, max= 1172, avg=881.05, stdev=174.15, samples=20 00:26:08.725 lat (msec) : 20=0.26%, 50=0.50%, 100=98.06%, 250=1.18% 00:26:08.725 cpu : usr=0.50%, sys=4.42%, ctx=1715, majf=0, minf=4097 00:26:08.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:08.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.725 issued rwts: total=8873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.725 job8: (groupid=0, jobs=1): err= 0: pid=3431634: Wed Nov 27 05:43:03 2024 00:26:08.725 read: IOPS=957, BW=239MiB/s (251MB/s)(2410MiB/10067msec) 00:26:08.725 slat (usec): min=11, max=42330, avg=1020.63, 
stdev=3424.08 00:26:08.725 clat (usec): min=812, max=153063, avg=65749.59, stdev=18513.33 00:26:08.725 lat (usec): min=854, max=153081, avg=66770.23, stdev=19052.68 00:26:08.725 clat percentiles (msec): 00:26:08.725 | 1.00th=[ 28], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 51], 00:26:08.725 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 72], 00:26:08.725 | 70.00th=[ 75], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:26:08.725 | 99.00th=[ 102], 99.50th=[ 109], 99.90th=[ 131], 99.95th=[ 142], 00:26:08.725 | 99.99th=[ 153] 00:26:08.725 bw ( KiB/s): min=164352, max=329728, per=7.12%, avg=245177.55, stdev=63716.06, samples=20 00:26:08.725 iops : min= 642, max= 1288, avg=957.70, stdev=248.86, samples=20 00:26:08.725 lat (usec) : 1000=0.06% 00:26:08.725 lat (msec) : 2=0.12%, 4=0.35%, 20=0.37%, 50=17.90%, 100=79.97% 00:26:08.725 lat (msec) : 250=1.22% 00:26:08.725 cpu : usr=0.33%, sys=4.29%, ctx=1807, majf=0, minf=4097 00:26:08.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:08.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.725 issued rwts: total=9639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.725 job9: (groupid=0, jobs=1): err= 0: pid=3431645: Wed Nov 27 05:43:03 2024 00:26:08.725 read: IOPS=1062, BW=266MiB/s (279MB/s)(2668MiB/10044msec) 00:26:08.725 slat (usec): min=11, max=18270, avg=931.74, stdev=2313.40 00:26:08.725 clat (msec): min=13, max=101, avg=59.24, stdev=12.85 00:26:08.725 lat (msec): min=13, max=101, avg=60.17, stdev=13.18 00:26:08.725 clat percentiles (msec): 00:26:08.725 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 53], 00:26:08.725 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 66], 00:26:08.725 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 77], 00:26:08.725 | 99.00th=[ 86], 99.50th=[ 
89], 99.90th=[ 94], 99.95th=[ 95], 00:26:08.725 | 99.99th=[ 102] 00:26:08.725 bw ( KiB/s): min=213504, max=437760, per=7.88%, avg=271564.80, stdev=60996.55, samples=20 00:26:08.725 iops : min= 834, max= 1710, avg=1060.80, stdev=238.27, samples=20 00:26:08.725 lat (msec) : 20=0.21%, 50=14.63%, 100=85.15%, 250=0.02% 00:26:08.725 cpu : usr=0.39%, sys=4.74%, ctx=1942, majf=0, minf=4097 00:26:08.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:08.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.725 issued rwts: total=10671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.725 job10: (groupid=0, jobs=1): err= 0: pid=3431653: Wed Nov 27 05:43:03 2024 00:26:08.725 read: IOPS=1062, BW=266MiB/s (278MB/s)(2667MiB/10042msec) 00:26:08.725 slat (usec): min=13, max=29896, avg=933.13, stdev=2642.08 00:26:08.725 clat (msec): min=13, max=113, avg=59.26, stdev=12.92 00:26:08.725 lat (msec): min=13, max=113, avg=60.19, stdev=13.32 00:26:08.725 clat percentiles (msec): 00:26:08.725 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 53], 00:26:08.725 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 66], 00:26:08.725 | 70.00th=[ 70], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 77], 00:26:08.725 | 99.00th=[ 87], 99.50th=[ 89], 99.90th=[ 97], 99.95th=[ 100], 00:26:08.725 | 99.99th=[ 113] 00:26:08.725 bw ( KiB/s): min=216064, max=432128, per=7.88%, avg=271462.40, stdev=59884.67, samples=20 00:26:08.725 iops : min= 844, max= 1688, avg=1060.40, stdev=233.92, samples=20 00:26:08.725 lat (msec) : 20=0.20%, 50=14.56%, 100=85.20%, 250=0.05% 00:26:08.725 cpu : usr=0.42%, sys=5.14%, ctx=1982, majf=0, minf=4097 00:26:08.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:08.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:26:08.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:08.726 issued rwts: total=10667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:08.726 00:26:08.726 Run status group 0 (all jobs): 00:26:08.726 READ: bw=3365MiB/s (3528MB/s), 220MiB/s-452MiB/s (231MB/s-474MB/s), io=33.1GiB (35.5GB), run=10015-10067msec 00:26:08.726 00:26:08.726 Disk stats (read/write): 00:26:08.726 nvme0n1: ios=17822/0, merge=0/0, ticks=1221803/0, in_queue=1221803, util=96.80% 00:26:08.726 nvme10n1: ios=31833/0, merge=0/0, ticks=1222714/0, in_queue=1222714, util=97.02% 00:26:08.726 nvme1n1: ios=30659/0, merge=0/0, ticks=1218171/0, in_queue=1218171, util=97.35% 00:26:08.726 nvme2n1: ios=35769/0, merge=0/0, ticks=1219925/0, in_queue=1219925, util=97.56% 00:26:08.726 nvme3n1: ios=28942/0, merge=0/0, ticks=1220777/0, in_queue=1220777, util=97.65% 00:26:08.726 nvme4n1: ios=25510/0, merge=0/0, ticks=1220657/0, in_queue=1220657, util=98.08% 00:26:08.726 nvme5n1: ios=17491/0, merge=0/0, ticks=1220971/0, in_queue=1220971, util=98.28% 00:26:08.726 nvme6n1: ios=17481/0, merge=0/0, ticks=1221547/0, in_queue=1221547, util=98.41% 00:26:08.726 nvme7n1: ios=18963/0, merge=0/0, ticks=1218055/0, in_queue=1218055, util=98.90% 00:26:08.726 nvme8n1: ios=20972/0, merge=0/0, ticks=1220558/0, in_queue=1220558, util=99.14% 00:26:08.726 nvme9n1: ios=20964/0, merge=0/0, ticks=1222809/0, in_queue=1222809, util=99.28% 00:26:08.726 05:43:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:08.726 [global] 00:26:08.726 thread=1 00:26:08.726 invalidate=1 00:26:08.726 rw=randwrite 00:26:08.726 time_based=1 00:26:08.726 runtime=10 00:26:08.726 ioengine=libaio 00:26:08.726 direct=1 00:26:08.726 bs=262144 00:26:08.726 iodepth=64 00:26:08.726 norandommap=1 00:26:08.726 
numjobs=1 00:26:08.726 00:26:08.726 [job0] 00:26:08.726 filename=/dev/nvme0n1 00:26:08.726 [job1] 00:26:08.726 filename=/dev/nvme10n1 00:26:08.726 [job2] 00:26:08.726 filename=/dev/nvme1n1 00:26:08.726 [job3] 00:26:08.726 filename=/dev/nvme2n1 00:26:08.726 [job4] 00:26:08.726 filename=/dev/nvme3n1 00:26:08.726 [job5] 00:26:08.726 filename=/dev/nvme4n1 00:26:08.726 [job6] 00:26:08.726 filename=/dev/nvme5n1 00:26:08.726 [job7] 00:26:08.726 filename=/dev/nvme6n1 00:26:08.726 [job8] 00:26:08.726 filename=/dev/nvme7n1 00:26:08.726 [job9] 00:26:08.726 filename=/dev/nvme8n1 00:26:08.726 [job10] 00:26:08.726 filename=/dev/nvme9n1 00:26:08.726 Could not set queue depth (nvme0n1) 00:26:08.726 Could not set queue depth (nvme10n1) 00:26:08.726 Could not set queue depth (nvme1n1) 00:26:08.726 Could not set queue depth (nvme2n1) 00:26:08.726 Could not set queue depth (nvme3n1) 00:26:08.726 Could not set queue depth (nvme4n1) 00:26:08.726 Could not set queue depth (nvme5n1) 00:26:08.726 Could not set queue depth (nvme6n1) 00:26:08.726 Could not set queue depth (nvme7n1) 00:26:08.726 Could not set queue depth (nvme8n1) 00:26:08.726 Could not set queue depth (nvme9n1) 00:26:08.726 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, 
(W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:08.726 fio-3.35 00:26:08.726 Starting 11 threads 00:26:18.709 00:26:18.709 job0: (groupid=0, jobs=1): err= 0: pid=3433247: Wed Nov 27 05:43:14 2024 00:26:18.709 write: IOPS=885, BW=221MiB/s (232MB/s)(2232MiB/10078msec); 0 zone resets 00:26:18.709 slat (usec): min=24, max=25379, avg=1096.34, stdev=2212.09 00:26:18.709 clat (msec): min=17, max=170, avg=71.13, stdev=13.69 00:26:18.709 lat (msec): min=17, max=170, avg=72.23, stdev=13.96 00:26:18.709 clat percentiles (msec): 00:26:18.709 | 1.00th=[ 51], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:26:18.709 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 78], 00:26:18.709 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 97], 00:26:18.709 | 99.00th=[ 106], 99.50th=[ 115], 99.90th=[ 157], 99.95th=[ 169], 00:26:18.709 | 99.99th=[ 171] 00:26:18.709 bw ( KiB/s): min=159744, max=275456, per=7.16%, avg=226918.40, stdev=35521.22, samples=20 00:26:18.709 iops : min= 624, max= 1076, avg=886.40, stdev=138.75, samples=20 00:26:18.709 lat (msec) : 20=0.09%, 50=0.85%, 100=96.89%, 250=2.17% 00:26:18.709 cpu : usr=1.98%, sys=4.06%, ctx=2199, majf=0, minf=1 00:26:18.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:18.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.709 issued rwts: 
total=0,8927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.709 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.709 job1: (groupid=0, jobs=1): err= 0: pid=3433273: Wed Nov 27 05:43:14 2024 00:26:18.709 write: IOPS=1274, BW=319MiB/s (334MB/s)(3197MiB/10032msec); 0 zone resets 00:26:18.709 slat (usec): min=22, max=45457, avg=749.79, stdev=1599.97 00:26:18.709 clat (msec): min=6, max=144, avg=49.43, stdev=15.15 00:26:18.709 lat (msec): min=6, max=144, avg=50.18, stdev=15.35 00:26:18.709 clat percentiles (msec): 00:26:18.709 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:18.709 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:18.709 | 70.00th=[ 54], 80.00th=[ 62], 90.00th=[ 79], 95.00th=[ 82], 00:26:18.709 | 99.00th=[ 91], 99.50th=[ 100], 99.90th=[ 105], 99.95th=[ 108], 00:26:18.709 | 99.99th=[ 144] 00:26:18.709 bw ( KiB/s): min=198541, max=399872, per=10.28%, avg=325805.45, stdev=79799.78, samples=20 00:26:18.709 iops : min= 775, max= 1562, avg=1272.65, stdev=311.76, samples=20 00:26:18.709 lat (msec) : 10=0.07%, 20=0.37%, 50=69.01%, 100=30.15%, 250=0.40% 00:26:18.709 cpu : usr=2.75%, sys=4.60%, ctx=3076, majf=0, minf=1 00:26:18.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:18.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.709 issued rwts: total=0,12789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.709 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.709 job2: (groupid=0, jobs=1): err= 0: pid=3433286: Wed Nov 27 05:43:14 2024 00:26:18.709 write: IOPS=870, BW=218MiB/s (228MB/s)(2192MiB/10075msec); 0 zone resets 00:26:18.709 slat (usec): min=24, max=17922, avg=1126.76, stdev=2235.56 00:26:18.709 clat (msec): min=11, max=172, avg=72.40, stdev=13.97 00:26:18.709 lat (msec): min=11, max=172, avg=73.53, stdev=14.20 00:26:18.709 clat percentiles 
(msec): 00:26:18.709 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 59], 20.00th=[ 61], 00:26:18.709 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 79], 00:26:18.709 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 99], 00:26:18.709 | 99.00th=[ 107], 99.50th=[ 114], 99.90th=[ 161], 99.95th=[ 171], 00:26:18.709 | 99.99th=[ 174] 00:26:18.709 bw ( KiB/s): min=158208, max=270848, per=7.03%, avg=222816.60, stdev=37779.89, samples=20 00:26:18.709 iops : min= 618, max= 1058, avg=870.35, stdev=147.60, samples=20 00:26:18.709 lat (msec) : 20=0.08%, 50=0.30%, 100=96.45%, 250=3.17% 00:26:18.709 cpu : usr=1.93%, sys=3.56%, ctx=2134, majf=0, minf=1 00:26:18.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:18.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.709 issued rwts: total=0,8766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.709 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.709 job3: (groupid=0, jobs=1): err= 0: pid=3433294: Wed Nov 27 05:43:14 2024 00:26:18.709 write: IOPS=1164, BW=291MiB/s (305MB/s)(2934MiB/10078msec); 0 zone resets 00:26:18.709 slat (usec): min=24, max=20568, avg=848.79, stdev=1825.35 00:26:18.709 clat (msec): min=7, max=175, avg=54.09, stdev=19.08 00:26:18.709 lat (msec): min=7, max=175, avg=54.94, stdev=19.38 00:26:18.709 clat percentiles (msec): 00:26:18.709 | 1.00th=[ 37], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:26:18.709 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 48], 00:26:18.709 | 70.00th=[ 60], 80.00th=[ 78], 90.00th=[ 82], 95.00th=[ 96], 00:26:18.709 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 144], 99.95th=[ 161], 00:26:18.709 | 99.99th=[ 176] 00:26:18.709 bw ( KiB/s): min=159744, max=397312, per=9.43%, avg=298777.60, stdev=92310.11, samples=20 00:26:18.709 iops : min= 624, max= 1552, avg=1167.10, stdev=360.59, samples=20 00:26:18.709 lat 
(msec) : 10=0.04%, 20=0.13%, 50=60.15%, 100=37.88%, 250=1.80% 00:26:18.709 cpu : usr=2.70%, sys=4.52%, ctx=2880, majf=0, minf=1 00:26:18.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:18.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.709 issued rwts: total=0,11735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.709 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.709 job4: (groupid=0, jobs=1): err= 0: pid=3433300: Wed Nov 27 05:43:14 2024 00:26:18.709 write: IOPS=907, BW=227MiB/s (238MB/s)(2280MiB/10050msec); 0 zone resets 00:26:18.709 slat (usec): min=22, max=15869, avg=1072.08, stdev=2050.98 00:26:18.709 clat (msec): min=2, max=113, avg=69.43, stdev=11.39 00:26:18.709 lat (msec): min=2, max=113, avg=70.50, stdev=11.61 00:26:18.709 clat percentiles (msec): 00:26:18.709 | 1.00th=[ 43], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:26:18.709 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 77], 00:26:18.709 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 85], 00:26:18.709 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 111], 00:26:18.709 | 99.99th=[ 113] 00:26:18.709 bw ( KiB/s): min=184832, max=269312, per=7.32%, avg=231859.45, stdev=28464.66, samples=20 00:26:18.709 iops : min= 722, max= 1052, avg=905.70, stdev=111.19, samples=20 00:26:18.709 lat (msec) : 4=0.05%, 10=0.09%, 20=0.13%, 50=1.20%, 100=97.85% 00:26:18.709 lat (msec) : 250=0.68% 00:26:18.709 cpu : usr=1.92%, sys=4.30%, ctx=2277, majf=0, minf=1 00:26:18.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:18.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.709 issued rwts: total=0,9121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.709 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:26:18.709 job5: (groupid=0, jobs=1): err= 0: pid=3433319: Wed Nov 27 05:43:14 2024 00:26:18.709 write: IOPS=912, BW=228MiB/s (239MB/s)(2291MiB/10048msec); 0 zone resets 00:26:18.709 slat (usec): min=25, max=21812, avg=1085.74, stdev=2112.48 00:26:18.709 clat (msec): min=4, max=114, avg=69.06, stdev=10.73 00:26:18.709 lat (msec): min=4, max=114, avg=70.15, stdev=10.92 00:26:18.709 clat percentiles (msec): 00:26:18.709 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:26:18.709 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 70], 00:26:18.709 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 85], 00:26:18.709 | 99.00th=[ 99], 99.50th=[ 103], 99.90th=[ 107], 99.95th=[ 109], 00:26:18.709 | 99.99th=[ 114] 00:26:18.709 bw ( KiB/s): min=182784, max=269824, per=7.35%, avg=233011.20, stdev=30740.16, samples=20 00:26:18.709 iops : min= 714, max= 1054, avg=910.20, stdev=120.08, samples=20 00:26:18.709 lat (msec) : 10=0.05%, 20=0.11%, 50=0.41%, 100=98.59%, 250=0.83% 00:26:18.709 cpu : usr=2.28%, sys=4.00%, ctx=2266, majf=0, minf=1 00:26:18.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:18.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.710 issued rwts: total=0,9165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.710 job6: (groupid=0, jobs=1): err= 0: pid=3433329: Wed Nov 27 05:43:14 2024 00:26:18.710 write: IOPS=911, BW=228MiB/s (239MB/s)(2290MiB/10047msec); 0 zone resets 00:26:18.710 slat (usec): min=28, max=22711, avg=1086.55, stdev=2075.23 00:26:18.710 clat (msec): min=26, max=109, avg=69.10, stdev=10.26 00:26:18.710 lat (msec): min=26, max=113, avg=70.19, stdev=10.45 00:26:18.710 clat percentiles (msec): 00:26:18.710 | 1.00th=[ 57], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 
00:26:18.710 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 70], 00:26:18.710 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 85], 00:26:18.710 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 108], 99.95th=[ 109], 00:26:18.710 | 99.99th=[ 110] 00:26:18.710 bw ( KiB/s): min=185344, max=269312, per=7.35%, avg=232851.00, stdev=31097.18, samples=20 00:26:18.710 iops : min= 724, max= 1052, avg=909.55, stdev=121.51, samples=20 00:26:18.710 lat (msec) : 50=0.28%, 100=99.12%, 250=0.60% 00:26:18.710 cpu : usr=2.26%, sys=3.96%, ctx=2266, majf=0, minf=1 00:26:18.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:18.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.710 issued rwts: total=0,9158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.710 job7: (groupid=0, jobs=1): err= 0: pid=3433337: Wed Nov 27 05:43:14 2024 00:26:18.710 write: IOPS=1164, BW=291MiB/s (305MB/s)(2934MiB/10075msec); 0 zone resets 00:26:18.710 slat (usec): min=21, max=19331, avg=847.65, stdev=1810.18 00:26:18.710 clat (msec): min=10, max=171, avg=54.08, stdev=19.09 00:26:18.710 lat (msec): min=10, max=171, avg=54.93, stdev=19.39 00:26:18.710 clat percentiles (msec): 00:26:18.710 | 1.00th=[ 37], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:26:18.710 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 47], 00:26:18.710 | 70.00th=[ 60], 80.00th=[ 77], 90.00th=[ 82], 95.00th=[ 97], 00:26:18.710 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 155], 99.95th=[ 161], 00:26:18.710 | 99.99th=[ 165] 00:26:18.710 bw ( KiB/s): min=160256, max=396800, per=9.43%, avg=298816.55, stdev=91974.78, samples=20 00:26:18.710 iops : min= 626, max= 1550, avg=1167.25, stdev=359.27, samples=20 00:26:18.710 lat (msec) : 20=0.13%, 50=60.31%, 100=37.70%, 250=1.86% 00:26:18.710 cpu : usr=2.47%, 
sys=4.58%, ctx=2876, majf=0, minf=1 00:26:18.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:18.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.710 issued rwts: total=0,11734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.710 job8: (groupid=0, jobs=1): err= 0: pid=3433359: Wed Nov 27 05:43:14 2024 00:26:18.710 write: IOPS=1342, BW=336MiB/s (352MB/s)(3368MiB/10033msec); 0 zone resets 00:26:18.710 slat (usec): min=21, max=14769, avg=737.85, stdev=1353.34 00:26:18.710 clat (usec): min=17153, max=89739, avg=46911.30, stdev=12343.53 00:26:18.710 lat (usec): min=17309, max=89812, avg=47649.14, stdev=12500.27 00:26:18.710 clat percentiles (usec): 00:26:18.710 | 1.00th=[19530], 5.00th=[36963], 10.00th=[38536], 20.00th=[40109], 00:26:18.710 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:26:18.710 | 70.00th=[54789], 80.00th=[60556], 90.00th=[63177], 95.00th=[65274], 00:26:18.710 | 99.00th=[83362], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:26:18.710 | 99.99th=[88605] 00:26:18.710 bw ( KiB/s): min=217088, max=535552, per=10.83%, avg=343244.80, stdev=82262.43, samples=20 00:26:18.710 iops : min= 848, max= 2092, avg=1340.80, stdev=321.34, samples=20 00:26:18.710 lat (msec) : 20=1.54%, 50=67.69%, 100=30.77% 00:26:18.710 cpu : usr=2.79%, sys=5.41%, ctx=3311, majf=0, minf=1 00:26:18.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:18.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.710 issued rwts: total=0,13471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.710 job9: (groupid=0, jobs=1): err= 0: 
pid=3433371: Wed Nov 27 05:43:14 2024 00:26:18.710 write: IOPS=1890, BW=473MiB/s (496MB/s)(4733MiB/10015msec); 0 zone resets 00:26:18.710 slat (usec): min=18, max=15362, avg=525.35, stdev=1151.63 00:26:18.710 clat (usec): min=6286, max=93192, avg=33322.18, stdev=17220.32 00:26:18.710 lat (usec): min=6321, max=94622, avg=33847.53, stdev=17487.69 00:26:18.710 clat percentiles (usec): 00:26:18.710 | 1.00th=[18744], 5.00th=[19268], 10.00th=[19530], 20.00th=[20055], 00:26:18.710 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21627], 60.00th=[39584], 00:26:18.710 | 70.00th=[41157], 80.00th=[42206], 90.00th=[59507], 95.00th=[77071], 00:26:18.710 | 99.00th=[82314], 99.50th=[84411], 99.90th=[88605], 99.95th=[90702], 00:26:18.710 | 99.99th=[92799] 00:26:18.710 bw ( KiB/s): min=197120, max=789504, per=15.24%, avg=483020.80, stdev=230592.47, samples=20 00:26:18.710 iops : min= 770, max= 3084, avg=1886.80, stdev=900.75, samples=20 00:26:18.710 lat (msec) : 10=0.04%, 20=16.15%, 50=71.15%, 100=12.66% 00:26:18.710 cpu : usr=3.44%, sys=5.22%, ctx=4109, majf=0, minf=1 00:26:18.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:18.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.710 issued rwts: total=0,18931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.710 job10: (groupid=0, jobs=1): err= 0: pid=3433379: Wed Nov 27 05:43:14 2024 00:26:18.710 write: IOPS=1086, BW=272MiB/s (285MB/s)(2737MiB/10075msec); 0 zone resets 00:26:18.710 slat (usec): min=23, max=23002, avg=902.35, stdev=2042.40 00:26:18.710 clat (msec): min=14, max=173, avg=57.98, stdev=20.65 00:26:18.710 lat (msec): min=14, max=173, avg=58.88, stdev=20.99 00:26:18.710 clat percentiles (msec): 00:26:18.710 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:26:18.710 | 30.00th=[ 42], 40.00th=[ 
43], 50.00th=[ 44], 60.00th=[ 61], 00:26:18.710 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 97], 00:26:18.710 | 99.00th=[ 105], 99.50th=[ 113], 99.90th=[ 155], 99.95th=[ 167], 00:26:18.710 | 99.99th=[ 169] 00:26:18.710 bw ( KiB/s): min=159232, max=394240, per=8.79%, avg=278584.55, stdev=92874.59, samples=20 00:26:18.710 iops : min= 622, max= 1540, avg=1088.20, stdev=362.81, samples=20 00:26:18.710 lat (msec) : 20=0.15%, 50=54.29%, 100=43.52%, 250=2.04% 00:26:18.710 cpu : usr=2.68%, sys=4.09%, ctx=2695, majf=0, minf=1 00:26:18.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:18.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:18.710 issued rwts: total=0,10946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:18.710 00:26:18.710 Run status group 0 (all jobs): 00:26:18.710 WRITE: bw=3094MiB/s (3245MB/s), 218MiB/s-473MiB/s (228MB/s-496MB/s), io=30.5GiB (32.7GB), run=10015-10078msec 00:26:18.710 00:26:18.710 Disk stats (read/write): 00:26:18.710 nvme0n1: ios=49/17563, merge=0/0, ticks=9/1216055, in_queue=1216064, util=96.62% 00:26:18.710 nvme10n1: ios=0/25040, merge=0/0, ticks=0/1218327, in_queue=1218327, util=96.77% 00:26:18.710 nvme1n1: ios=0/17235, merge=0/0, ticks=0/1212475, in_queue=1212475, util=97.12% 00:26:18.710 nvme2n1: ios=0/23167, merge=0/0, ticks=0/1214821, in_queue=1214821, util=97.30% 00:26:18.710 nvme3n1: ios=0/17858, merge=0/0, ticks=0/1216359, in_queue=1216359, util=97.42% 00:26:18.710 nvme4n1: ios=0/17932, merge=0/0, ticks=0/1213831, in_queue=1213831, util=97.80% 00:26:18.710 nvme5n1: ios=0/17918, merge=0/0, ticks=0/1214505, in_queue=1214505, util=97.96% 00:26:18.710 nvme6n1: ios=0/23165, merge=0/0, ticks=0/1214459, in_queue=1214459, util=98.11% 00:26:18.710 nvme7n1: ios=0/26398, merge=0/0, ticks=0/1218871, in_queue=1218871, 
util=98.58% 00:26:18.710 nvme8n1: ios=0/36912, merge=0/0, ticks=0/1224951, in_queue=1224951, util=98.82% 00:26:18.710 nvme9n1: ios=0/21598, merge=0/0, ticks=0/1214033, in_queue=1214033, util=98.97% 00:26:18.710 05:43:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:18.710 05:43:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:18.710 05:43:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.710 05:43:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:19.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.278 
05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.278 05:43:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:20.214 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.214 05:43:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:21.150 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.150 05:43:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.150 05:43:17 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:22.527 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.527 05:43:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:23.465 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 
00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.465 05:43:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:24.402 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1223 -- # local i=0 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.402 05:43:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:25.337 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.337 05:43:21 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.337 05:43:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:26.271 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 
-- # lsblk -l -o NAME,SERIAL 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.271 05:43:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:27.206 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:27.206 05:43:23 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.206 05:43:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:28.141 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.141 05:43:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:29.075 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:29.075 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:29.075 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:29.075 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:29.075 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.335 05:43:25 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:29.335 rmmod nvme_rdma 00:26:29.335 rmmod nvme_fabrics 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3425144 ']' 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@518 -- # killprocess 3425144 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3425144 ']' 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3425144 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3425144 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3425144' 00:26:29.335 killing process with pid 3425144 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3425144 00:26:29.335 05:43:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3425144 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:33.529 00:26:33.529 real 1m20.782s 00:26:33.529 user 5m7.748s 00:26:33.529 sys 0m20.647s 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.529 
************************************ 00:26:33.529 END TEST nvmf_multiconnection 00:26:33.529 ************************************ 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:33.529 ************************************ 00:26:33.529 START TEST nvmf_initiator_timeout 00:26:33.529 ************************************ 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:26:33.529 * Looking for test storage... 
00:26:33.529 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # 
case "$op" in 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.529 --rc genhtml_branch_coverage=1 00:26:33.529 --rc genhtml_function_coverage=1 00:26:33.529 --rc genhtml_legend=1 00:26:33.529 --rc geninfo_all_blocks=1 00:26:33.529 --rc geninfo_unexecuted_blocks=1 00:26:33.529 00:26:33.529 ' 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.529 --rc genhtml_branch_coverage=1 00:26:33.529 --rc genhtml_function_coverage=1 00:26:33.529 --rc genhtml_legend=1 00:26:33.529 --rc geninfo_all_blocks=1 00:26:33.529 --rc geninfo_unexecuted_blocks=1 00:26:33.529 00:26:33.529 ' 00:26:33.529 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.530 --rc genhtml_branch_coverage=1 00:26:33.530 --rc genhtml_function_coverage=1 00:26:33.530 --rc genhtml_legend=1 00:26:33.530 --rc geninfo_all_blocks=1 00:26:33.530 --rc geninfo_unexecuted_blocks=1 00:26:33.530 00:26:33.530 ' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.530 --rc genhtml_branch_coverage=1 00:26:33.530 --rc genhtml_function_coverage=1 00:26:33.530 --rc genhtml_legend=1 00:26:33.530 --rc geninfo_all_blocks=1 00:26:33.530 --rc geninfo_unexecuted_blocks=1 00:26:33.530 00:26:33.530 ' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.530 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.530 05:43:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.687 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.687 05:43:37 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.687 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.687 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.687 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.687 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:41.688 05:43:37 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:41.688 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:41.688 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:41.688 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # 
[[ rdma == tcp ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:41.688 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe 
ib_umad 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@108 -- # echo mlx_0_0 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:41.688 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 
192.168.100.8 ]] 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:41.689 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:41.689 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:41.689 altname enp217s0f0np0 00:26:41.689 altname ens818f0np0 00:26:41.689 inet 192.168.100.8/24 scope global mlx_0_0 00:26:41.689 valid_lft forever preferred_lft forever 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:41.689 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:41.689 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:41.689 altname enp217s0f1np1 00:26:41.689 altname ens818f1np1 00:26:41.689 inet 192.168.100.9/24 scope global mlx_0_1 00:26:41.689 valid_lft forever preferred_lft forever 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:41.689 05:43:37 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 
addr show mlx_0_1 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:41.689 05:43:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:41.689 192.168.100.9' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:41.689 192.168.100.9' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:41.689 192.168.100.9' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:41.689 05:43:38 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3441329 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3441329 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3441329 ']' 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.689 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.689 [2024-11-27 05:43:38.146450] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:26:41.689 [2024-11-27 05:43:38.146549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.951 [2024-11-27 05:43:38.299592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.951 [2024-11-27 05:43:38.400652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.951 [2024-11-27 05:43:38.400720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.951 [2024-11-27 05:43:38.400745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.951 [2024-11-27 05:43:38.400758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.951 [2024-11-27 05:43:38.400767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:41.951 [2024-11-27 05:43:38.403315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.951 [2024-11-27 05:43:38.403392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.951 [2024-11-27 05:43:38.403453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.951 [2024-11-27 05:43:38.403461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.521 05:43:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.521 Malloc0 00:26:42.521 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.521 
05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:42.521 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.521 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.521 Delay0 00:26:42.521 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.521 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:42.521 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.521 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.781 [2024-11-27 05:43:39.128778] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029bc0/0x7ff6f9fbd940) succeed. 00:26:42.781 [2024-11-27 05:43:39.138638] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029d40/0x7ff6f9f79940) succeed. 
00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.041 [2024-11-27 05:43:39.427219] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:43.041 05:43:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.041 05:43:39 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:43.980 05:43:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:43.980 05:43:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:43.980 05:43:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.980 05:43:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:43.980 05:43:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3441985 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:45.887 05:43:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:45.887 [global] 00:26:45.887 thread=1 00:26:45.887 invalidate=1 00:26:45.887 rw=write 00:26:45.887 time_based=1 00:26:45.887 runtime=60 00:26:45.887 ioengine=libaio 00:26:45.887 direct=1 00:26:45.887 bs=4096 00:26:45.887 iodepth=1 00:26:45.887 norandommap=0 00:26:45.887 numjobs=1 00:26:45.887 00:26:45.887 verify_dump=1 00:26:45.887 verify_backlog=512 00:26:45.887 verify_state_save=0 00:26:45.887 do_verify=1 00:26:45.887 verify=crc32c-intel 00:26:46.163 [job0] 00:26:46.163 filename=/dev/nvme0n1 00:26:46.163 Could not set queue depth (nvme0n1) 00:26:46.424 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:46.424 fio-3.35 00:26:46.424 Starting 1 thread 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.951 true 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.951 true 00:26:48.951 05:43:45 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.951 true 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.951 true 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.951 05:43:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.227 true 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # 
rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.227 true 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.227 true 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.227 true 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:52.227 05:43:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3441985 00:27:48.570 00:27:48.570 job0: (groupid=0, jobs=1): err= 0: pid=3442193: Wed Nov 27 05:44:42 2024 00:27:48.570 read: IOPS=1175, BW=4703KiB/s (4816kB/s)(276MiB/60000msec) 00:27:48.570 slat (usec): min=8, 
max=7542, avg= 9.28, stdev=28.38 00:27:48.570 clat (usec): min=82, max=42555k, avg=715.08, stdev=160217.18 00:27:48.570 lat (usec): min=100, max=42555k, avg=724.37, stdev=160217.18 00:27:48.570 clat percentiles (usec): 00:27:48.570 | 1.00th=[ 98], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 105], 00:27:48.570 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:27:48.570 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 122], 95.00th=[ 125], 00:27:48.570 | 99.00th=[ 133], 99.50th=[ 135], 99.90th=[ 141], 99.95th=[ 147], 00:27:48.570 | 99.99th=[ 190] 00:27:48.570 write: IOPS=1177, BW=4710KiB/s (4823kB/s)(276MiB/60000msec); 0 zone resets 00:27:48.570 slat (usec): min=8, max=987, avg=11.95, stdev= 4.48 00:27:48.570 clat (usec): min=47, max=762, avg=108.77, stdev= 8.45 00:27:48.570 lat (usec): min=100, max=1035, avg=120.73, stdev= 9.73 00:27:48.570 clat percentiles (usec): 00:27:48.570 | 1.00th=[ 95], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 102], 00:27:48.570 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:27:48.570 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 119], 95.00th=[ 122], 00:27:48.570 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 141], 99.95th=[ 163], 00:27:48.570 | 99.99th=[ 330] 00:27:48.570 bw ( KiB/s): min= 1024, max=17168, per=100.00%, avg=15797.94, stdev=2674.06, samples=35 00:27:48.570 iops : min= 256, max= 4292, avg=3949.49, stdev=668.52, samples=35 00:27:48.570 lat (usec) : 50=0.01%, 100=7.23%, 250=92.76%, 500=0.01%, 1000=0.01% 00:27:48.570 lat (msec) : >=2000=0.01% 00:27:48.570 cpu : usr=2.02%, sys=3.04%, ctx=141212, majf=0, minf=142 00:27:48.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.570 issued rwts: total=70548,70656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.570 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:27:48.570 00:27:48.570 Run status group 0 (all jobs): 00:27:48.570 READ: bw=4703KiB/s (4816kB/s), 4703KiB/s-4703KiB/s (4816kB/s-4816kB/s), io=276MiB (289MB), run=60000-60000msec 00:27:48.571 WRITE: bw=4710KiB/s (4823kB/s), 4710KiB/s-4710KiB/s (4823kB/s-4823kB/s), io=276MiB (289MB), run=60000-60000msec 00:27:48.571 00:27:48.571 Disk stats (read/write): 00:27:48.571 nvme0n1: ios=70441/70256, merge=0/0, ticks=7195/7155, in_queue=14350, util=99.55% 00:27:48.571 05:44:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:48.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:48.571 nvmf hotplug test: fio successful as 
expected 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:48.571 rmmod nvme_rdma 00:27:48.571 rmmod nvme_fabrics 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.571 05:44:43 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3441329 ']' 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3441329 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3441329 ']' 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3441329 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.571 05:44:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3441329 00:27:48.571 05:44:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:48.571 05:44:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:48.571 05:44:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3441329' 00:27:48.571 killing process with pid 3441329 00:27:48.571 05:44:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3441329 00:27:48.571 05:44:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3441329 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:49.503 00:27:49.503 real 1m16.373s 00:27:49.503 user 4m39.807s 00:27:49.503 sys 0m9.167s 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.503 ************************************ 00:27:49.503 END TEST nvmf_initiator_timeout 00:27:49.503 ************************************ 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:49.503 ************************************ 00:27:49.503 START TEST nvmf_srq_overwhelm 00:27:49.503 ************************************ 00:27:49.503 05:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:49.503 * Looking for test storage... 
00:27:49.503 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:49.503 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:49.503 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:27:49.503 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:27:49.761 05:44:46 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.761 05:44:46 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:49.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.761 --rc genhtml_branch_coverage=1 00:27:49.761 --rc genhtml_function_coverage=1 00:27:49.761 --rc genhtml_legend=1 00:27:49.761 --rc geninfo_all_blocks=1 00:27:49.761 --rc geninfo_unexecuted_blocks=1 00:27:49.761 00:27:49.761 ' 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:49.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.761 --rc genhtml_branch_coverage=1 00:27:49.761 --rc genhtml_function_coverage=1 00:27:49.761 --rc genhtml_legend=1 00:27:49.761 --rc geninfo_all_blocks=1 00:27:49.761 --rc geninfo_unexecuted_blocks=1 00:27:49.761 00:27:49.761 ' 00:27:49.761 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:49.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.761 --rc genhtml_branch_coverage=1 00:27:49.761 --rc genhtml_function_coverage=1 00:27:49.761 --rc genhtml_legend=1 00:27:49.761 --rc geninfo_all_blocks=1 00:27:49.761 --rc geninfo_unexecuted_blocks=1 00:27:49.761 00:27:49.761 ' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:49.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.762 --rc genhtml_branch_coverage=1 00:27:49.762 --rc genhtml_function_coverage=1 00:27:49.762 --rc genhtml_legend=1 00:27:49.762 --rc geninfo_all_blocks=1 00:27:49.762 --rc geninfo_unexecuted_blocks=1 00:27:49.762 00:27:49.762 ' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:27:49.762 
05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:49.762 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:27:49.762 05:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.870 05:44:54 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.870 05:44:54 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:57.870 05:44:54 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:57.870 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:57.870 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:57.870 05:44:54 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:57.870 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:57.870 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:57.870 05:44:54 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:57.870 05:44:54 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:57.870 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:57.871 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:57.871 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:57.871 altname enp217s0f0np0 00:27:57.871 altname ens818f0np0 00:27:57.871 
inet 192.168.100.8/24 scope global mlx_0_0 00:27:57.871 valid_lft forever preferred_lft forever 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:57.871 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:57.871 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:57.871 altname enp217s0f1np1 00:27:57.871 altname ens818f1np1 00:27:57.871 inet 192.168.100.9/24 scope global mlx_0_1 00:27:57.871 valid_lft forever preferred_lft forever 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:57.871 05:44:54 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:57.871 192.168.100.9' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:57.871 192.168.100.9' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:57.871 192.168.100.9' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # 
nvmfpid=3456483 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3456483 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 3456483 ']' 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:57.871 05:44:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:58.129 [2024-11-27 05:44:54.502287] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:27:58.130 [2024-11-27 05:44:54.502396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.130 [2024-11-27 05:44:54.656778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.388 [2024-11-27 05:44:54.762930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:58.388 [2024-11-27 05:44:54.762980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.388 [2024-11-27 05:44:54.762993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.388 [2024-11-27 05:44:54.763007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.388 [2024-11-27 05:44:54.763017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.388 [2024-11-27 05:44:54.765431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.388 [2024-11-27 05:44:54.765507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.388 [2024-11-27 05:44:54.765527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.388 [2024-11-27 05:44:54.765534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:27:58.954 05:44:55 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:58.954 [2024-11-27 05:44:55.392116] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f52de50f940) succeed. 00:27:58.954 [2024-11-27 05:44:55.402177] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f52ddbbd940) succeed. 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.954 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:59.212 Malloc0 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:59.212 [2024-11-27 05:44:55.595793] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.212 05:44:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:28:00.147 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:28:00.147 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:00.148 05:44:56 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 Malloc1 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 05:44:56 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.148 05:44:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:01.521 05:44:57 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:01.521 Malloc2 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set 
+x 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.521 05:44:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.454 05:44:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:02.454 Malloc3 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.454 05:44:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # 
waitforblk nvme3n1 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.386 05:44:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:03.644 Malloc4 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.644 05:45:00 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.644 05:45:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:04.579 
05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:04.579 Malloc5 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.579 05:45:01 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.579 05:45:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:28:05.974 05:45:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:28:05.974 [global] 00:28:05.974 thread=1 00:28:05.974 invalidate=1 
00:28:05.974 rw=read 00:28:05.974 time_based=1 00:28:05.974 runtime=10 00:28:05.974 ioengine=libaio 00:28:05.974 direct=1 00:28:05.974 bs=1048576 00:28:05.974 iodepth=128 00:28:05.974 norandommap=1 00:28:05.974 numjobs=13 00:28:05.974 00:28:05.974 [job0] 00:28:05.974 filename=/dev/nvme0n1 00:28:05.974 [job1] 00:28:05.974 filename=/dev/nvme1n1 00:28:05.974 [job2] 00:28:05.974 filename=/dev/nvme2n1 00:28:05.974 [job3] 00:28:05.974 filename=/dev/nvme3n1 00:28:05.974 [job4] 00:28:05.974 filename=/dev/nvme4n1 00:28:05.974 [job5] 00:28:05.974 filename=/dev/nvme5n1 00:28:05.974 Could not set queue depth (nvme0n1) 00:28:05.974 Could not set queue depth (nvme1n1) 00:28:05.974 Could not set queue depth (nvme2n1) 00:28:05.974 Could not set queue depth (nvme3n1) 00:28:05.974 Could not set queue depth (nvme4n1) 00:28:05.974 Could not set queue depth (nvme5n1) 00:28:06.241 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:06.241 ... 00:28:06.241 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:06.241 ... 00:28:06.241 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:06.241 ... 00:28:06.241 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:06.241 ... 00:28:06.241 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:06.241 ... 00:28:06.241 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:06.241 ... 
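For readability, the per-subsystem setup loop traced above (six iterations of `nvmf_create_subsystem`, `bdev_malloc_create`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`, `nvme connect`, `waitforblk`) can be summarized as a dry-run sketch. This only echoes the commands rather than executing them; `rpc_cmd` and `waitforblk` are helpers from the SPDK test harness, the `--hostnqn`/`--hostid` arguments are omitted, and the cnode0 serial number is assumed to follow the same pattern as cnode1..cnode5.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the srq_overwhelm.sh setup loop seen in the trace above.
# Commands are echoed, not executed; rpc_cmd/waitforblk come from the SPDK
# test harness and are not defined here.
setup_srq_subsystems() {
    local addr=192.168.100.8 port=4420 i nqn
    for i in $(seq 0 5); do
        nqn="nqn.2016-06.io.spdk:cnode$i"
        # Subsystem with an auto-allowed host list and a zero-padded serial
        echo rpc_cmd nvmf_create_subsystem "$nqn" -a -s "$(printf 'SPDK%014d' "$i")"
        # 64 MiB malloc bdev with 512-byte blocks, exposed as a namespace
        echo rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        echo rpc_cmd nvmf_subsystem_add_ns "$nqn" "Malloc$i"
        # RDMA listener on the target address/port used throughout the log
        echo rpc_cmd nvmf_subsystem_add_listener "$nqn" -t rdma -a "$addr" -s "$port"
        # Host side: connect with a 15 s keep-alive, then wait for the block device
        echo nvme connect -i 15 -t rdma -n "$nqn" -a "$addr" -s "$port"
        echo waitforblk "nvme${i}n1"
    done
}

setup_srq_subsystems
```

After the six devices appear, the harness launches fio through `fio-wrapper` with the `[global]`/`[jobN]` configuration shown above (1 MiB reads, iodepth 128, 13 jobs per device, 10 s runtime), which is what produces the 78 (6 x 13) threads reported below.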
00:28:06.241 fio-3.35 00:28:06.241 Starting 78 threads 00:28:18.440 00:28:18.440 job0: (groupid=0, jobs=1): err= 0: pid=3458192: Wed Nov 27 05:45:13 2024 00:28:18.440 read: IOPS=1, BW=1436KiB/s (1471kB/s)(15.0MiB/10695msec) 00:28:18.440 slat (msec): min=4, max=2110, avg=708.18, stdev=998.54 00:28:18.440 clat (msec): min=71, max=10689, avg=5855.30, stdev=3458.94 00:28:18.440 lat (msec): min=2117, max=10693, avg=6563.48, stdev=3272.55 00:28:18.440 clat percentiles (msec): 00:28:18.440 | 1.00th=[ 71], 5.00th=[ 71], 10.00th=[ 2123], 20.00th=[ 2165], 00:28:18.440 | 30.00th=[ 4212], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6409], 00:28:18.440 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[10671], 00:28:18.440 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:18.440 | 99.99th=[10671] 00:28:18.440 lat (msec) : 100=6.67%, >=2000=93.33% 00:28:18.440 cpu : usr=0.02%, sys=0.13%, ctx=54, majf=0, minf=3841 00:28:18.440 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:18.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.440 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.440 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.440 job0: (groupid=0, jobs=1): err= 0: pid=3458193: Wed Nov 27 05:45:13 2024 00:28:18.440 read: IOPS=19, BW=19.0MiB/s (19.9MB/s)(203MiB/10672msec) 00:28:18.440 slat (usec): min=97, max=2087.0k, avg=52205.82, stdev=283329.66 00:28:18.440 clat (msec): min=72, max=8598, avg=4617.07, stdev=1888.41 00:28:18.440 lat (msec): min=706, max=8606, avg=4669.28, stdev=1873.28 00:28:18.440 clat percentiles (msec): 00:28:18.440 | 1.00th=[ 709], 5.00th=[ 793], 10.00th=[ 911], 20.00th=[ 2198], 00:28:18.440 | 30.00th=[ 4245], 40.00th=[ 4866], 50.00th=[ 5537], 60.00th=[ 5604], 00:28:18.440 | 70.00th=[ 5738], 80.00th=[ 5873], 90.00th=[ 6007], 
95.00th=[ 6074], 00:28:18.440 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8658], 99.95th=[ 8658], 00:28:18.440 | 99.99th=[ 8658] 00:28:18.440 bw ( KiB/s): min= 1610, max=65536, per=0.75%, avg=25868.33, stdev=25288.64, samples=6 00:28:18.440 iops : min= 1, max= 64, avg=25.17, stdev=24.81, samples=6 00:28:18.440 lat (msec) : 100=0.49%, 750=2.96%, 1000=6.90%, 2000=6.90%, >=2000=82.76% 00:28:18.440 cpu : usr=0.00%, sys=0.83%, ctx=229, majf=0, minf=32769 00:28:18.440 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.9%, 32=15.8%, >=64=69.0% 00:28:18.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.440 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:28:18.440 issued rwts: total=203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.440 job0: (groupid=0, jobs=1): err= 0: pid=3458194: Wed Nov 27 05:45:13 2024 00:28:18.440 read: IOPS=20, BW=20.9MiB/s (22.0MB/s)(223MiB/10647msec) 00:28:18.441 slat (usec): min=36, max=2168.1k, avg=47418.88, stdev=263634.82 00:28:18.441 clat (msec): min=71, max=10631, avg=5661.01, stdev=3411.98 00:28:18.441 lat (msec): min=1261, max=10646, avg=5708.43, stdev=3396.57 00:28:18.441 clat percentiles (msec): 00:28:18.441 | 1.00th=[ 1267], 5.00th=[ 1334], 10.00th=[ 1401], 20.00th=[ 1452], 00:28:18.441 | 30.00th=[ 1921], 40.00th=[ 3540], 50.00th=[ 6342], 60.00th=[ 8658], 00:28:18.441 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:28:18.441 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[10671], 99.95th=[10671], 00:28:18.441 | 99.99th=[10671] 00:28:18.441 bw ( KiB/s): min= 2052, max=98304, per=0.81%, avg=28087.43, stdev=33437.21, samples=7 00:28:18.441 iops : min= 2, max= 96, avg=27.43, stdev=32.65, samples=7 00:28:18.441 lat (msec) : 100=0.45%, 2000=31.39%, >=2000=68.16% 00:28:18.441 cpu : usr=0.00%, sys=0.98%, ctx=418, majf=0, minf=32769 00:28:18.441 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 
16=7.2%, 32=14.3%, >=64=71.7% 00:28:18.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.441 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:28:18.441 issued rwts: total=223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.441 job0: (groupid=0, jobs=1): err= 0: pid=3458195: Wed Nov 27 05:45:13 2024 00:28:18.441 read: IOPS=3, BW=3934KiB/s (4028kB/s)(41.0MiB/10673msec) 00:28:18.441 slat (usec): min=484, max=2192.8k, avg=258570.37, stdev=676948.71 00:28:18.441 clat (msec): min=71, max=10637, avg=5005.09, stdev=2379.09 00:28:18.441 lat (msec): min=2105, max=10672, avg=5263.66, stdev=2405.43 00:28:18.441 clat percentiles (msec): 00:28:18.441 | 1.00th=[ 72], 5.00th=[ 4077], 10.00th=[ 4077], 20.00th=[ 4111], 00:28:18.441 | 30.00th=[ 4144], 40.00th=[ 4144], 50.00th=[ 4178], 60.00th=[ 4178], 00:28:18.441 | 70.00th=[ 4212], 80.00th=[ 4212], 90.00th=[10537], 95.00th=[10537], 00:28:18.441 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:18.441 | 99.99th=[10671] 00:28:18.441 lat (msec) : 100=2.44%, >=2000=97.56% 00:28:18.441 cpu : usr=0.01%, sys=0.20%, ctx=111, majf=0, minf=10497 00:28:18.441 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:28:18.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.441 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:18.441 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.441 job0: (groupid=0, jobs=1): err= 0: pid=3458196: Wed Nov 27 05:45:13 2024 00:28:18.441 read: IOPS=4, BW=4320KiB/s (4424kB/s)(45.0MiB/10666msec) 00:28:18.441 slat (usec): min=946, max=2105.8k, avg=235731.81, stdev=646072.29 00:28:18.441 clat (msec): min=57, max=10664, avg=6298.93, stdev=3931.98 00:28:18.441 lat (msec): min=2043, max=10665, 
avg=6534.66, stdev=3866.72 00:28:18.441 clat percentiles (msec): 00:28:18.441 | 1.00th=[ 58], 5.00th=[ 2039], 10.00th=[ 2056], 20.00th=[ 2106], 00:28:18.441 | 30.00th=[ 2165], 40.00th=[ 2198], 50.00th=[ 6409], 60.00th=[ 8557], 00:28:18.441 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:28:18.441 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:18.441 | 99.99th=[10671] 00:28:18.441 lat (msec) : 100=2.22%, >=2000=97.78% 00:28:18.441 cpu : usr=0.00%, sys=0.45%, ctx=65, majf=0, minf=11521 00:28:18.441 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:28:18.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.441 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:18.441 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.441 job0: (groupid=0, jobs=1): err= 0: pid=3458197: Wed Nov 27 05:45:13 2024 00:28:18.441 read: IOPS=46, BW=46.6MiB/s (48.9MB/s)(502MiB/10774msec) 00:28:18.441 slat (usec): min=41, max=4179.3k, avg=21311.47, stdev=225222.00 00:28:18.441 clat (msec): min=72, max=7138, avg=1436.29, stdev=1604.83 00:28:18.441 lat (msec): min=242, max=7152, avg=1457.60, stdev=1632.15 00:28:18.441 clat percentiles (msec): 00:28:18.441 | 1.00th=[ 266], 5.00th=[ 275], 10.00th=[ 334], 20.00th=[ 430], 00:28:18.441 | 30.00th=[ 617], 40.00th=[ 827], 50.00th=[ 852], 60.00th=[ 860], 00:28:18.441 | 70.00th=[ 2005], 80.00th=[ 2265], 90.00th=[ 2400], 95.00th=[ 7013], 00:28:18.441 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7148], 99.95th=[ 7148], 00:28:18.441 | 99.99th=[ 7148] 00:28:18.441 bw ( KiB/s): min= 1380, max=350208, per=4.45%, avg=153466.40, stdev=126588.76, samples=5 00:28:18.441 iops : min= 1, max= 342, avg=149.80, stdev=123.73, samples=5 00:28:18.441 lat (msec) : 100=0.20%, 250=0.80%, 500=24.70%, 750=9.96%, 1000=32.87% 00:28:18.441 lat (msec) : 
2000=0.80%, >=2000=30.68% 00:28:18.441 cpu : usr=0.02%, sys=1.58%, ctx=408, majf=0, minf=32769 00:28:18.441 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5% 00:28:18.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.441 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.441 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.441 job0: (groupid=0, jobs=1): err= 0: pid=3458198: Wed Nov 27 05:45:13 2024 00:28:18.441 read: IOPS=47, BW=47.1MiB/s (49.4MB/s)(506MiB/10747msec) 00:28:18.441 slat (usec): min=32, max=3927.4k, avg=21124.56, stdev=196765.36 00:28:18.441 clat (msec): min=53, max=4823, avg=2457.27, stdev=1569.31 00:28:18.441 lat (msec): min=696, max=4828, avg=2478.39, stdev=1567.35 00:28:18.441 clat percentiles (msec): 00:28:18.441 | 1.00th=[ 709], 5.00th=[ 768], 10.00th=[ 827], 20.00th=[ 869], 00:28:18.441 | 30.00th=[ 885], 40.00th=[ 927], 50.00th=[ 3004], 60.00th=[ 3339], 00:28:18.441 | 70.00th=[ 3842], 80.00th=[ 4144], 90.00th=[ 4530], 95.00th=[ 4665], 00:28:18.441 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:28:18.441 | 99.99th=[ 4799] 00:28:18.441 bw ( KiB/s): min= 1434, max=174080, per=2.50%, avg=86174.78, stdev=66551.41, samples=9 00:28:18.441 iops : min= 1, max= 170, avg=84.00, stdev=65.21, samples=9 00:28:18.441 lat (msec) : 100=0.20%, 750=4.55%, 1000=41.11%, 2000=0.20%, >=2000=53.95% 00:28:18.441 cpu : usr=0.04%, sys=1.50%, ctx=739, majf=0, minf=32769 00:28:18.441 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:28:18.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.441 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.441 issued rwts: total=506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.441 latency : target=0, window=0, percentile=100.00%, depth=128 
00:28:18.441 job0: (groupid=0, jobs=1): err= 0: pid=3458199: Wed Nov 27 05:45:13 2024 00:28:18.441 read: IOPS=34, BW=34.8MiB/s (36.5MB/s)(374MiB/10733msec) 00:28:18.441 slat (usec): min=33, max=2160.7k, avg=26866.30, stdev=190602.87 00:28:18.441 clat (msec): min=682, max=8250, avg=3447.88, stdev=3055.45 00:28:18.441 lat (msec): min=764, max=8257, avg=3474.75, stdev=3062.57 00:28:18.441 clat percentiles (msec): 00:28:18.441 | 1.00th=[ 768], 5.00th=[ 852], 10.00th=[ 852], 20.00th=[ 936], 00:28:18.441 | 30.00th=[ 1099], 40.00th=[ 1401], 50.00th=[ 1636], 60.00th=[ 1754], 00:28:18.441 | 70.00th=[ 7483], 80.00th=[ 7617], 90.00th=[ 7752], 95.00th=[ 8087], 00:28:18.441 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:28:18.441 | 99.99th=[ 8221] 00:28:18.441 bw ( KiB/s): min= 1992, max=159744, per=1.63%, avg=56200.00, stdev=62176.57, samples=9 00:28:18.441 iops : min= 1, max= 156, avg=54.78, stdev=60.82, samples=9 00:28:18.441 lat (msec) : 750=0.27%, 1000=25.94%, 2000=38.50%, >=2000=35.29% 00:28:18.441 cpu : usr=0.00%, sys=1.37%, ctx=656, majf=0, minf=32769 00:28:18.441 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.2% 00:28:18.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.441 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:18.442 issued rwts: total=374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.442 job0: (groupid=0, jobs=1): err= 0: pid=3458200: Wed Nov 27 05:45:13 2024 00:28:18.442 read: IOPS=22, BW=22.5MiB/s (23.6MB/s)(241MiB/10703msec) 00:28:18.442 slat (usec): min=523, max=2237.9k, avg=44183.34, stdev=271554.87 00:28:18.442 clat (msec): min=53, max=9490, avg=5224.99, stdev=4020.43 00:28:18.442 lat (msec): min=841, max=9496, avg=5269.17, stdev=4010.79 00:28:18.442 clat percentiles (msec): 00:28:18.442 | 1.00th=[ 835], 5.00th=[ 844], 10.00th=[ 852], 20.00th=[ 911], 00:28:18.442 | 
30.00th=[ 961], 40.00th=[ 1062], 50.00th=[ 8658], 60.00th=[ 8926], 00:28:18.442 | 70.00th=[ 9060], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9463], 00:28:18.442 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:28:18.442 | 99.99th=[ 9463] 00:28:18.442 bw ( KiB/s): min= 1610, max=135168, per=0.97%, avg=33290.57, stdev=50706.14, samples=7 00:28:18.442 iops : min= 1, max= 132, avg=32.43, stdev=49.58, samples=7 00:28:18.442 lat (msec) : 100=0.41%, 1000=36.10%, 2000=8.30%, >=2000=55.19% 00:28:18.442 cpu : usr=0.01%, sys=1.07%, ctx=508, majf=0, minf=32769 00:28:18.442 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.6%, 32=13.3%, >=64=73.9% 00:28:18.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.442 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:28:18.442 issued rwts: total=241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.442 job0: (groupid=0, jobs=1): err= 0: pid=3458201: Wed Nov 27 05:45:13 2024 00:28:18.442 read: IOPS=30, BW=30.0MiB/s (31.5MB/s)(321MiB/10686msec) 00:28:18.442 slat (usec): min=36, max=2182.7k, avg=33092.08, stdev=216746.38 00:28:18.442 clat (msec): min=61, max=8653, avg=3860.02, stdev=2790.09 00:28:18.442 lat (msec): min=1010, max=8657, avg=3893.12, stdev=2795.11 00:28:18.442 clat percentiles (msec): 00:28:18.442 | 1.00th=[ 1011], 5.00th=[ 1045], 10.00th=[ 1070], 20.00th=[ 1284], 00:28:18.442 | 30.00th=[ 1653], 40.00th=[ 1770], 50.00th=[ 1871], 60.00th=[ 4279], 00:28:18.442 | 70.00th=[ 7148], 80.00th=[ 7349], 90.00th=[ 7483], 95.00th=[ 7550], 00:28:18.442 | 99.00th=[ 7617], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:28:18.442 | 99.99th=[ 8658] 00:28:18.442 bw ( KiB/s): min= 1610, max=159744, per=1.44%, avg=49609.25, stdev=59182.21, samples=8 00:28:18.442 iops : min= 1, max= 156, avg=48.38, stdev=57.86, samples=8 00:28:18.442 lat (msec) : 100=0.31%, 1000=0.31%, 2000=53.27%, 
>=2000=46.11% 00:28:18.442 cpu : usr=0.01%, sys=0.97%, ctx=585, majf=0, minf=32769 00:28:18.442 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.4% 00:28:18.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.442 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:28:18.442 issued rwts: total=321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.442 job0: (groupid=0, jobs=1): err= 0: pid=3458202: Wed Nov 27 05:45:13 2024 00:28:18.442 read: IOPS=62, BW=62.3MiB/s (65.3MB/s)(668MiB/10726msec) 00:28:18.442 slat (usec): min=34, max=2057.4k, avg=15941.45, stdev=141817.91 00:28:18.442 clat (msec): min=72, max=5056, avg=1330.02, stdev=1293.52 00:28:18.442 lat (msec): min=280, max=5068, avg=1345.96, stdev=1301.85 00:28:18.442 clat percentiles (msec): 00:28:18.442 | 1.00th=[ 279], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:28:18.442 | 30.00th=[ 342], 40.00th=[ 852], 50.00th=[ 877], 60.00th=[ 919], 00:28:18.442 | 70.00th=[ 1045], 80.00th=[ 3306], 90.00th=[ 3473], 95.00th=[ 3540], 00:28:18.442 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5067], 99.95th=[ 5067], 00:28:18.442 | 99.99th=[ 5067] 00:28:18.442 bw ( KiB/s): min= 1610, max=374784, per=4.01%, avg=138404.37, stdev=146041.55, samples=8 00:28:18.442 iops : min= 1, max= 366, avg=135.00, stdev=142.69, samples=8 00:28:18.442 lat (msec) : 100=0.15%, 500=32.78%, 750=3.59%, 1000=32.19%, 2000=7.78% 00:28:18.442 lat (msec) : >=2000=23.50% 00:28:18.442 cpu : usr=0.06%, sys=1.64%, ctx=675, majf=0, minf=32769 00:28:18.442 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:28:18.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.442 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.442 issued rwts: total=668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.442 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:28:18.442 job0: (groupid=0, jobs=1): err= 0: pid=3458203: Wed Nov 27 05:45:13 2024 00:28:18.442 read: IOPS=28, BW=28.3MiB/s (29.7MB/s)(304MiB/10732msec) 00:28:18.442 slat (usec): min=83, max=2211.9k, avg=35119.84, stdev=241376.28 00:28:18.442 clat (msec): min=53, max=9639, avg=4310.50, stdev=4058.70 00:28:18.442 lat (msec): min=720, max=9648, avg=4345.62, stdev=4059.86 00:28:18.442 clat percentiles (msec): 00:28:18.442 | 1.00th=[ 718], 5.00th=[ 726], 10.00th=[ 743], 20.00th=[ 776], 00:28:18.442 | 30.00th=[ 827], 40.00th=[ 911], 50.00th=[ 995], 60.00th=[ 8658], 00:28:18.442 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9463], 95.00th=[ 9597], 00:28:18.442 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:28:18.442 | 99.99th=[ 9597] 00:28:18.442 bw ( KiB/s): min= 1440, max=133120, per=1.31%, avg=45236.00, stdev=57602.01, samples=8 00:28:18.442 iops : min= 1, max= 130, avg=44.12, stdev=56.30, samples=8 00:28:18.442 lat (msec) : 100=0.33%, 750=13.16%, 1000=36.84%, 2000=6.25%, >=2000=43.42% 00:28:18.442 cpu : usr=0.01%, sys=1.44%, ctx=354, majf=0, minf=32769 00:28:18.442 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.5%, >=64=79.3% 00:28:18.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.442 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:18.442 issued rwts: total=304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.442 job0: (groupid=0, jobs=1): err= 0: pid=3458204: Wed Nov 27 05:45:13 2024 00:28:18.442 read: IOPS=142, BW=143MiB/s (150MB/s)(1435MiB/10041msec) 00:28:18.442 slat (usec): min=40, max=1891.1k, avg=6965.94, stdev=50472.18 00:28:18.442 clat (msec): min=33, max=2678, avg=693.75, stdev=248.38 00:28:18.442 lat (msec): min=40, max=2746, avg=700.72, stdev=254.86 00:28:18.442 clat percentiles (msec): 00:28:18.442 | 1.00th=[ 103], 5.00th=[ 351], 10.00th=[ 456], 
20.00th=[ 535], 00:28:18.442 | 30.00th=[ 684], 40.00th=[ 693], 50.00th=[ 693], 60.00th=[ 709], 00:28:18.442 | 70.00th=[ 735], 80.00th=[ 835], 90.00th=[ 911], 95.00th=[ 953], 00:28:18.442 | 99.00th=[ 986], 99.50th=[ 2668], 99.90th=[ 2668], 99.95th=[ 2668], 00:28:18.442 | 99.99th=[ 2668] 00:28:18.442 bw ( KiB/s): min=96448, max=282624, per=5.06%, avg=174532.57, stdev=43943.80, samples=14 00:28:18.442 iops : min= 94, max= 276, avg=170.43, stdev=42.94, samples=14 00:28:18.442 lat (msec) : 50=0.42%, 100=0.56%, 250=2.72%, 500=12.13%, 750=56.38% 00:28:18.442 lat (msec) : 1000=26.97%, 2000=0.07%, >=2000=0.77% 00:28:18.442 cpu : usr=0.14%, sys=2.56%, ctx=1246, majf=0, minf=32769 00:28:18.442 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:28:18.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.442 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.442 issued rwts: total=1435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.442 job1: (groupid=0, jobs=1): err= 0: pid=3458206: Wed Nov 27 05:45:13 2024 00:28:18.442 read: IOPS=4, BW=4245KiB/s (4347kB/s)(44.0MiB/10613msec) 00:28:18.442 slat (usec): min=807, max=2120.4k, avg=239476.23, stdev=649459.01 00:28:18.442 clat (msec): min=75, max=10609, avg=8035.18, stdev=3077.29 00:28:18.442 lat (msec): min=2108, max=10612, avg=8274.66, stdev=2844.66 00:28:18.442 clat percentiles (msec): 00:28:18.442 | 1.00th=[ 75], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279], 00:28:18.442 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10402], 00:28:18.442 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10671], 00:28:18.442 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:18.442 | 99.99th=[10671] 00:28:18.442 lat (msec) : 100=2.27%, >=2000=97.73% 00:28:18.443 cpu : usr=0.01%, sys=0.41%, ctx=57, majf=0, minf=11265 00:28:18.443 IO 
depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:28:18.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.443 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:18.443 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.443 job1: (groupid=0, jobs=1): err= 0: pid=3458207: Wed Nov 27 05:45:13 2024 00:28:18.443 read: IOPS=18, BW=18.9MiB/s (19.9MB/s)(201MiB/10611msec) 00:28:18.443 slat (usec): min=572, max=2071.1k, avg=52400.78, stdev=285681.42 00:28:18.443 clat (msec): min=76, max=9582, avg=6152.00, stdev=3381.47 00:28:18.443 lat (msec): min=1295, max=9599, avg=6204.40, stdev=3357.19 00:28:18.443 clat percentiles (msec): 00:28:18.443 | 1.00th=[ 1284], 5.00th=[ 1318], 10.00th=[ 1334], 20.00th=[ 1502], 00:28:18.443 | 30.00th=[ 3406], 40.00th=[ 5470], 50.00th=[ 8557], 60.00th=[ 8792], 00:28:18.443 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9463], 00:28:18.443 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:28:18.443 | 99.99th=[ 9597] 00:28:18.443 bw ( KiB/s): min= 2052, max=94208, per=0.73%, avg=25255.17, stdev=34287.26, samples=6 00:28:18.443 iops : min= 2, max= 92, avg=24.50, stdev=33.56, samples=6 00:28:18.443 lat (msec) : 100=0.50%, 2000=24.88%, >=2000=74.63% 00:28:18.443 cpu : usr=0.02%, sys=1.07%, ctx=352, majf=0, minf=32769 00:28:18.443 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=15.9%, >=64=68.7% 00:28:18.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.443 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:28:18.443 issued rwts: total=201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.443 job1: (groupid=0, jobs=1): err= 0: pid=3458208: Wed Nov 27 05:45:13 2024 00:28:18.443 read: IOPS=43, BW=43.2MiB/s 
(45.3MB/s)(461MiB/10681msec) 00:28:18.443 slat (usec): min=41, max=2079.9k, avg=23035.45, stdev=188864.20 00:28:18.443 clat (msec): min=59, max=8998, avg=2844.55, stdev=3521.37 00:28:18.443 lat (msec): min=554, max=8999, avg=2867.59, stdev=3528.42 00:28:18.443 clat percentiles (msec): 00:28:18.443 | 1.00th=[ 558], 5.00th=[ 567], 10.00th=[ 584], 20.00th=[ 600], 00:28:18.443 | 30.00th=[ 617], 40.00th=[ 634], 50.00th=[ 642], 60.00th=[ 676], 00:28:18.443 | 70.00th=[ 760], 80.00th=[ 8490], 90.00th=[ 8792], 95.00th=[ 8926], 00:28:18.443 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:28:18.443 | 99.99th=[ 9060] 00:28:18.443 bw ( KiB/s): min= 1610, max=225280, per=2.48%, avg=85449.25, stdev=95248.42, samples=8 00:28:18.443 iops : min= 1, max= 220, avg=83.38, stdev=93.09, samples=8 00:28:18.443 lat (msec) : 100=0.22%, 750=69.20%, 1000=0.65%, >=2000=29.93% 00:28:18.443 cpu : usr=0.01%, sys=1.31%, ctx=409, majf=0, minf=32769 00:28:18.443 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.3% 00:28:18.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.443 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.443 issued rwts: total=461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.443 job1: (groupid=0, jobs=1): err= 0: pid=3458209: Wed Nov 27 05:45:13 2024 00:28:18.443 read: IOPS=3, BW=3923KiB/s (4017kB/s)(41.0MiB/10703msec) 00:28:18.443 slat (usec): min=975, max=2117.0k, avg=259638.07, stdev=675413.55 00:28:18.443 clat (msec): min=57, max=10701, avg=6279.03, stdev=4175.22 00:28:18.443 lat (msec): min=1991, max=10702, avg=6538.66, stdev=4109.11 00:28:18.443 clat percentiles (msec): 00:28:18.443 | 1.00th=[ 58], 5.00th=[ 2005], 10.00th=[ 2022], 20.00th=[ 2056], 00:28:18.443 | 30.00th=[ 2140], 40.00th=[ 2165], 50.00th=[ 6409], 60.00th=[10537], 00:28:18.443 | 70.00th=[10671], 80.00th=[10671], 
90.00th=[10671], 95.00th=[10671], 00:28:18.443 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:18.443 | 99.99th=[10671] 00:28:18.443 lat (msec) : 100=2.44%, 2000=2.44%, >=2000=95.12% 00:28:18.443 cpu : usr=0.00%, sys=0.36%, ctx=77, majf=0, minf=10497 00:28:18.443 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:28:18.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.443 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:18.443 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.443 job1: (groupid=0, jobs=1): err= 0: pid=3458210: Wed Nov 27 05:45:13 2024 00:28:18.443 read: IOPS=3, BW=3139KiB/s (3214kB/s)(33.0MiB/10765msec) 00:28:18.443 slat (usec): min=942, max=2137.5k, avg=324448.20, stdev=743281.50 00:28:18.443 clat (msec): min=57, max=10761, avg=8243.16, stdev=3792.82 00:28:18.443 lat (msec): min=2019, max=10764, avg=8567.61, stdev=3518.75 00:28:18.443 clat percentiles (msec): 00:28:18.443 | 1.00th=[ 58], 5.00th=[ 2022], 10.00th=[ 2106], 20.00th=[ 2232], 00:28:18.443 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:28:18.443 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:28:18.443 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:28:18.443 | 99.99th=[10805] 00:28:18.443 lat (msec) : 100=3.03%, >=2000=96.97% 00:28:18.443 cpu : usr=0.00%, sys=0.40%, ctx=85, majf=0, minf=8449 00:28:18.443 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:28:18.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.443 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:18.443 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.443 
job1: (groupid=0, jobs=1): err= 0: pid=3458211: Wed Nov 27 05:45:13 2024 00:28:18.443 read: IOPS=6, BW=6832KiB/s (6996kB/s)(72.0MiB/10792msec) 00:28:18.443 slat (usec): min=846, max=2081.7k, avg=149061.95, stdev=518255.85 00:28:18.443 clat (msec): min=59, max=10790, avg=8378.23, stdev=3476.77 00:28:18.443 lat (msec): min=2068, max=10791, avg=8527.29, stdev=3342.55 00:28:18.443 clat percentiles (msec): 00:28:18.443 | 1.00th=[ 59], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279], 00:28:18.443 | 30.00th=[ 6477], 40.00th=[10671], 50.00th=[10671], 60.00th=[10805], 00:28:18.443 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:28:18.443 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:28:18.443 | 99.99th=[10805] 00:28:18.443 lat (msec) : 100=1.39%, >=2000=98.61% 00:28:18.443 cpu : usr=0.00%, sys=0.69%, ctx=115, majf=0, minf=18433 00:28:18.443 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:28:18.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.443 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:18.443 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.443 job1: (groupid=0, jobs=1): err= 0: pid=3458212: Wed Nov 27 05:45:13 2024 00:28:18.443 read: IOPS=64, BW=64.2MiB/s (67.3MB/s)(687MiB/10706msec) 00:28:18.443 slat (usec): min=43, max=1928.5k, avg=15575.42, stdev=104067.79 00:28:18.443 clat (usec): min=1704, max=5960.4k, avg=1617078.79, stdev=883318.56 00:28:18.443 lat (msec): min=609, max=5967, avg=1632.65, stdev=893.71 00:28:18.443 clat percentiles (msec): 00:28:18.443 | 1.00th=[ 617], 5.00th=[ 667], 10.00th=[ 676], 20.00th=[ 676], 00:28:18.443 | 30.00th=[ 726], 40.00th=[ 1368], 50.00th=[ 1519], 60.00th=[ 1821], 00:28:18.443 | 70.00th=[ 1955], 80.00th=[ 2400], 90.00th=[ 2869], 95.00th=[ 3171], 00:28:18.443 | 99.00th=[ 3507], 99.50th=[ 
3977], 99.90th=[ 5940], 99.95th=[ 5940], 00:28:18.443 | 99.99th=[ 5940] 00:28:18.443 bw ( KiB/s): min=26677, max=182272, per=2.55%, avg=88071.46, stdev=45999.62, samples=13 00:28:18.443 iops : min= 26, max= 178, avg=85.92, stdev=44.94, samples=13 00:28:18.443 lat (msec) : 2=0.15%, 750=30.71%, 1000=3.06%, 2000=40.76%, >=2000=25.33% 00:28:18.443 cpu : usr=0.03%, sys=1.18%, ctx=1019, majf=0, minf=32769 00:28:18.443 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:28:18.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.444 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.444 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.444 job1: (groupid=0, jobs=1): err= 0: pid=3458213: Wed Nov 27 05:45:13 2024 00:28:18.444 read: IOPS=27, BW=27.7MiB/s (29.1MB/s)(295MiB/10632msec) 00:28:18.444 slat (usec): min=40, max=2086.6k, avg=35835.08, stdev=229802.72 00:28:18.444 clat (msec): min=59, max=9114, avg=4323.87, stdev=3525.70 00:28:18.444 lat (msec): min=650, max=9115, avg=4359.71, stdev=3523.21 00:28:18.444 clat percentiles (msec): 00:28:18.444 | 1.00th=[ 651], 5.00th=[ 852], 10.00th=[ 1003], 20.00th=[ 1036], 00:28:18.444 | 30.00th=[ 1045], 40.00th=[ 1183], 50.00th=[ 2735], 60.00th=[ 6141], 00:28:18.444 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 9060], 00:28:18.444 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:28:18.444 | 99.99th=[ 9060] 00:28:18.444 bw ( KiB/s): min= 2052, max=217088, per=1.25%, avg=43008.50, stdev=73181.15, samples=8 00:28:18.444 iops : min= 2, max= 212, avg=42.00, stdev=71.47, samples=8 00:28:18.444 lat (msec) : 100=0.34%, 750=2.71%, 1000=7.12%, 2000=36.61%, >=2000=53.22% 00:28:18.444 cpu : usr=0.00%, sys=0.97%, ctx=404, majf=0, minf=32769 00:28:18.444 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.6% 
00:28:18.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.444 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:18.444 issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.444 job1: (groupid=0, jobs=1): err= 0: pid=3458214: Wed Nov 27 05:45:13 2024 00:28:18.444 read: IOPS=17, BW=17.6MiB/s (18.4MB/s)(188MiB/10707msec) 00:28:18.444 slat (usec): min=36, max=2059.4k, avg=56629.99, stdev=287358.07 00:28:18.444 clat (msec): min=59, max=8528, avg=4531.71, stdev=1603.27 00:28:18.444 lat (msec): min=1771, max=8529, avg=4588.34, stdev=1592.39 00:28:18.444 clat percentiles (msec): 00:28:18.444 | 1.00th=[ 1770], 5.00th=[ 1787], 10.00th=[ 1838], 20.00th=[ 3742], 00:28:18.444 | 30.00th=[ 4245], 40.00th=[ 4530], 50.00th=[ 4665], 60.00th=[ 5134], 00:28:18.444 | 70.00th=[ 5403], 80.00th=[ 5738], 90.00th=[ 5940], 95.00th=[ 6007], 00:28:18.444 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:28:18.444 | 99.99th=[ 8557] 00:28:18.444 bw ( KiB/s): min= 1610, max=83968, per=0.72%, avg=24898.00, stdev=33427.40, samples=5 00:28:18.444 iops : min= 1, max= 82, avg=24.20, stdev=32.74, samples=5 00:28:18.444 lat (msec) : 100=0.53%, 2000=15.96%, >=2000=83.51% 00:28:18.444 cpu : usr=0.01%, sys=0.80%, ctx=373, majf=0, minf=32769 00:28:18.444 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.5%, 32=17.0%, >=64=66.5% 00:28:18.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.444 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:28:18.444 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.444 job1: (groupid=0, jobs=1): err= 0: pid=3458215: Wed Nov 27 05:45:13 2024 00:28:18.444 read: IOPS=26, BW=26.2MiB/s (27.4MB/s)(281MiB/10735msec) 00:28:18.444 slat (usec): min=50, max=2082.1k, 
avg=35592.96, stdev=242159.72 00:28:18.444 clat (msec): min=731, max=9208, avg=2703.58, stdev=3205.03 00:28:18.444 lat (msec): min=736, max=9223, avg=2739.17, stdev=3227.91 00:28:18.444 clat percentiles (msec): 00:28:18.444 | 1.00th=[ 743], 5.00th=[ 827], 10.00th=[ 835], 20.00th=[ 852], 00:28:18.444 | 30.00th=[ 852], 40.00th=[ 860], 50.00th=[ 1053], 60.00th=[ 1267], 00:28:18.444 | 70.00th=[ 1469], 80.00th=[ 5067], 90.00th=[ 9060], 95.00th=[ 9194], 00:28:18.444 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:28:18.444 | 99.99th=[ 9194] 00:28:18.444 bw ( KiB/s): min= 2048, max=159744, per=3.05%, avg=105130.67, stdev=89325.05, samples=3 00:28:18.444 iops : min= 2, max= 156, avg=102.67, stdev=87.23, samples=3 00:28:18.444 lat (msec) : 750=3.20%, 1000=44.13%, 2000=29.18%, >=2000=23.49% 00:28:18.444 cpu : usr=0.00%, sys=1.44%, ctx=249, majf=0, minf=32769 00:28:18.444 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.4%, >=64=77.6% 00:28:18.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.444 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:18.444 issued rwts: total=281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.444 job1: (groupid=0, jobs=1): err= 0: pid=3458216: Wed Nov 27 05:45:13 2024 00:28:18.444 read: IOPS=150, BW=150MiB/s (157MB/s)(1582MiB/10541msec) 00:28:18.444 slat (usec): min=37, max=1816.9k, avg=6654.28, stdev=46664.93 00:28:18.444 clat (msec): min=2, max=2878, avg=761.63, stdev=601.90 00:28:18.444 lat (msec): min=283, max=2880, avg=768.29, stdev=604.74 00:28:18.444 clat percentiles (msec): 00:28:18.444 | 1.00th=[ 284], 5.00th=[ 288], 10.00th=[ 288], 20.00th=[ 288], 00:28:18.444 | 30.00th=[ 292], 40.00th=[ 305], 50.00th=[ 768], 60.00th=[ 852], 00:28:18.444 | 70.00th=[ 869], 80.00th=[ 936], 90.00th=[ 1703], 95.00th=[ 2299], 00:28:18.444 | 99.00th=[ 2769], 99.50th=[ 2802], 99.90th=[ 2869], 
99.95th=[ 2869], 00:28:18.444 | 99.99th=[ 2869] 00:28:18.444 bw ( KiB/s): min=65536, max=454656, per=6.17%, avg=212699.43, stdev=135139.01, samples=14 00:28:18.444 iops : min= 64, max= 444, avg=207.71, stdev=131.97, samples=14 00:28:18.444 lat (msec) : 4=0.06%, 500=46.46%, 750=2.84%, 1000=34.89%, 2000=9.42% 00:28:18.444 lat (msec) : >=2000=6.32% 00:28:18.444 cpu : usr=0.11%, sys=2.17%, ctx=1654, majf=0, minf=32769 00:28:18.444 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:28:18.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.444 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.444 issued rwts: total=1582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.444 job1: (groupid=0, jobs=1): err= 0: pid=3458217: Wed Nov 27 05:45:13 2024 00:28:18.444 read: IOPS=5, BW=5615KiB/s (5750kB/s)(59.0MiB/10760msec) 00:28:18.444 slat (usec): min=776, max=2090.0k, avg=181037.39, stdev=567651.27 00:28:18.444 clat (msec): min=77, max=10754, avg=5681.09, stdev=3897.59 00:28:18.444 lat (msec): min=1999, max=10759, avg=5862.13, stdev=3880.86 00:28:18.444 clat percentiles (msec): 00:28:18.444 | 1.00th=[ 79], 5.00th=[ 2005], 10.00th=[ 2005], 20.00th=[ 2123], 00:28:18.444 | 30.00th=[ 2140], 40.00th=[ 2165], 50.00th=[ 4279], 60.00th=[ 6477], 00:28:18.444 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:28:18.444 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:28:18.444 | 99.99th=[10805] 00:28:18.444 lat (msec) : 100=1.69%, 2000=1.69%, >=2000=96.61% 00:28:18.444 cpu : usr=0.02%, sys=0.58%, ctx=90, majf=0, minf=15105 00:28:18.444 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:28:18.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.444 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 
00:28:18.444 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.444 job1: (groupid=0, jobs=1): err= 0: pid=3458218: Wed Nov 27 05:45:13 2024 00:28:18.444 read: IOPS=23, BW=23.8MiB/s (25.0MB/s)(255MiB/10711msec) 00:28:18.444 slat (usec): min=457, max=2075.3k, avg=41989.68, stdev=249828.42 00:28:18.444 clat (usec): min=1593, max=7463.0k, avg=3474657.60, stdev=2224933.53 00:28:18.444 lat (msec): min=1221, max=7469, avg=3516.65, stdev=2234.13 00:28:18.444 clat percentiles (msec): 00:28:18.444 | 1.00th=[ 1217], 5.00th=[ 1267], 10.00th=[ 1284], 20.00th=[ 1351], 00:28:18.444 | 30.00th=[ 2165], 40.00th=[ 2500], 50.00th=[ 2802], 60.00th=[ 3171], 00:28:18.445 | 70.00th=[ 3406], 80.00th=[ 7148], 90.00th=[ 7349], 95.00th=[ 7416], 00:28:18.445 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:28:18.445 | 99.99th=[ 7483] 00:28:18.445 bw ( KiB/s): min=22528, max=83800, per=1.88%, avg=64982.00, stdev=28604.57, samples=4 00:28:18.445 iops : min= 22, max= 81, avg=63.25, stdev=27.75, samples=4 00:28:18.445 lat (msec) : 2=0.39%, 2000=27.45%, >=2000=72.16% 00:28:18.445 cpu : usr=0.00%, sys=1.19%, ctx=488, majf=0, minf=32769 00:28:18.445 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.5%, >=64=75.3% 00:28:18.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.445 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:28:18.445 issued rwts: total=255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.445 job2: (groupid=0, jobs=1): err= 0: pid=3458224: Wed Nov 27 05:45:13 2024 00:28:18.445 read: IOPS=66, BW=66.6MiB/s (69.8MB/s)(667MiB/10014msec) 00:28:18.445 slat (usec): min=37, max=2087.5k, avg=14986.22, stdev=82845.45 00:28:18.445 clat (msec): min=13, max=4578, avg=1805.19, stdev=1372.90 00:28:18.445 lat (msec): min=13, max=4581, avg=1820.17, 
stdev=1378.70 00:28:18.445 clat percentiles (msec): 00:28:18.445 | 1.00th=[ 20], 5.00th=[ 46], 10.00th=[ 481], 20.00th=[ 592], 00:28:18.445 | 30.00th=[ 617], 40.00th=[ 651], 50.00th=[ 1284], 60.00th=[ 2366], 00:28:18.445 | 70.00th=[ 3071], 80.00th=[ 3306], 90.00th=[ 3540], 95.00th=[ 4010], 00:28:18.445 | 99.00th=[ 4463], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:28:18.445 | 99.99th=[ 4597] 00:28:18.445 bw ( KiB/s): min= 4096, max=227328, per=1.87%, avg=64443.73, stdev=66149.66, samples=15 00:28:18.445 iops : min= 4, max= 222, avg=62.93, stdev=64.60, samples=15 00:28:18.445 lat (msec) : 20=1.20%, 50=4.20%, 100=1.65%, 250=1.20%, 500=1.95% 00:28:18.445 lat (msec) : 750=35.83%, 1000=2.70%, 2000=6.60%, >=2000=44.68% 00:28:18.445 cpu : usr=0.05%, sys=1.77%, ctx=1241, majf=0, minf=32769 00:28:18.445 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:28:18.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.445 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.445 issued rwts: total=667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.445 job2: (groupid=0, jobs=1): err= 0: pid=3458225: Wed Nov 27 05:45:13 2024 00:28:18.445 read: IOPS=79, BW=79.7MiB/s (83.6MB/s)(805MiB/10095msec) 00:28:18.445 slat (usec): min=54, max=2043.9k, avg=12420.43, stdev=100100.61 00:28:18.445 clat (msec): min=88, max=5628, avg=1034.98, stdev=959.74 00:28:18.445 lat (msec): min=96, max=5633, avg=1047.40, stdev=973.00 00:28:18.445 clat percentiles (msec): 00:28:18.445 | 1.00th=[ 174], 5.00th=[ 384], 10.00th=[ 667], 20.00th=[ 735], 00:28:18.445 | 30.00th=[ 802], 40.00th=[ 818], 50.00th=[ 844], 60.00th=[ 852], 00:28:18.445 | 70.00th=[ 860], 80.00th=[ 902], 90.00th=[ 1385], 95.00th=[ 1620], 00:28:18.445 | 99.00th=[ 5604], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:28:18.445 | 99.99th=[ 5604] 00:28:18.445 bw ( KiB/s): 
min=133120, max=180224, per=4.47%, avg=154282.67, stdev=16158.45, samples=9 00:28:18.445 iops : min= 130, max= 176, avg=150.67, stdev=15.78, samples=9 00:28:18.445 lat (msec) : 100=0.25%, 250=1.99%, 500=5.22%, 750=14.78%, 1000=63.73% 00:28:18.445 lat (msec) : 2000=9.07%, >=2000=4.97% 00:28:18.445 cpu : usr=0.06%, sys=2.27%, ctx=790, majf=0, minf=32769 00:28:18.445 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:28:18.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.445 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.445 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.445 job2: (groupid=0, jobs=1): err= 0: pid=3458226: Wed Nov 27 05:45:13 2024 00:28:18.445 read: IOPS=22, BW=22.0MiB/s (23.1MB/s)(234MiB/10619msec) 00:28:18.445 slat (usec): min=433, max=2049.0k, avg=45044.85, stdev=198147.84 00:28:18.445 clat (msec): min=77, max=8549, avg=3078.16, stdev=1165.85 00:28:18.445 lat (msec): min=1156, max=8598, avg=3123.21, stdev=1180.08 00:28:18.445 clat percentiles (msec): 00:28:18.445 | 1.00th=[ 1167], 5.00th=[ 1301], 10.00th=[ 1452], 20.00th=[ 1703], 00:28:18.445 | 30.00th=[ 2433], 40.00th=[ 2802], 50.00th=[ 3406], 60.00th=[ 3675], 00:28:18.445 | 70.00th=[ 3910], 80.00th=[ 3943], 90.00th=[ 4144], 95.00th=[ 4530], 00:28:18.445 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 8557], 99.95th=[ 8557], 00:28:18.445 | 99.99th=[ 8557] 00:28:18.445 bw ( KiB/s): min= 2052, max=77824, per=0.91%, avg=31305.71, stdev=24136.48, samples=7 00:28:18.445 iops : min= 2, max= 76, avg=30.57, stdev=23.57, samples=7 00:28:18.445 lat (msec) : 100=0.43%, 2000=22.65%, >=2000=76.92% 00:28:18.445 cpu : usr=0.00%, sys=1.02%, ctx=860, majf=0, minf=32769 00:28:18.445 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.7%, >=64=73.1% 00:28:18.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:18.445 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:28:18.445 issued rwts: total=234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.445 job2: (groupid=0, jobs=1): err= 0: pid=3458227: Wed Nov 27 05:45:13 2024 00:28:18.445 read: IOPS=2, BW=2989KiB/s (3061kB/s)(31.0MiB/10620msec) 00:28:18.445 slat (usec): min=1066, max=2067.4k, avg=339855.49, stdev=748574.57 00:28:18.445 clat (msec): min=84, max=10595, avg=5592.18, stdev=2978.12 00:28:18.445 lat (msec): min=2101, max=10619, avg=5932.04, stdev=2929.37 00:28:18.445 clat percentiles (msec): 00:28:18.445 | 1.00th=[ 85], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2198], 00:28:18.445 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6409], 00:28:18.445 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[10537], 95.00th=[10537], 00:28:18.445 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:28:18.445 | 99.99th=[10537] 00:28:18.445 lat (msec) : 100=3.23%, >=2000=96.77% 00:28:18.445 cpu : usr=0.01%, sys=0.24%, ctx=74, majf=0, minf=7937 00:28:18.445 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:28:18.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:18.445 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.445 job2: (groupid=0, jobs=1): err= 0: pid=3458228: Wed Nov 27 05:45:13 2024 00:28:18.445 read: IOPS=27, BW=27.2MiB/s (28.5MB/s)(272MiB/10017msec) 00:28:18.445 slat (usec): min=458, max=2061.3k, avg=36764.46, stdev=171874.51 00:28:18.445 clat (msec): min=15, max=7139, avg=2004.67, stdev=950.71 00:28:18.445 lat (msec): min=18, max=7235, avg=2041.44, stdev=999.75 00:28:18.445 clat percentiles (msec): 00:28:18.445 | 1.00th=[ 20], 5.00th=[ 159], 
10.00th=[ 451], 20.00th=[ 1418], 00:28:18.445 | 30.00th=[ 1787], 40.00th=[ 2005], 50.00th=[ 2072], 60.00th=[ 2232], 00:28:18.445 | 70.00th=[ 2467], 80.00th=[ 2567], 90.00th=[ 2903], 95.00th=[ 3037], 00:28:18.445 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 7148], 99.95th=[ 7148], 00:28:18.445 | 99.99th=[ 7148] 00:28:18.445 bw ( KiB/s): min=24576, max=83968, per=1.38%, avg=47513.60, stdev=22869.84, samples=5 00:28:18.445 iops : min= 24, max= 82, avg=46.40, stdev=22.33, samples=5 00:28:18.445 lat (msec) : 20=1.10%, 50=1.47%, 100=0.37%, 250=3.68%, 500=4.04% 00:28:18.445 lat (msec) : 750=1.84%, 1000=2.57%, 2000=25.00%, >=2000=59.93% 00:28:18.445 cpu : usr=0.00%, sys=0.96%, ctx=895, majf=0, minf=32769 00:28:18.445 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.8%, >=64=76.8% 00:28:18.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.446 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:28:18.446 issued rwts: total=272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.446 job2: (groupid=0, jobs=1): err= 0: pid=3458229: Wed Nov 27 05:45:13 2024 00:28:18.446 read: IOPS=6, BW=6986KiB/s (7153kB/s)(73.0MiB/10701msec) 00:28:18.446 slat (usec): min=500, max=2066.0k, avg=145369.73, stdev=507340.17 00:28:18.446 clat (msec): min=88, max=10698, avg=5570.15, stdev=3451.49 00:28:18.446 lat (msec): min=1994, max=10700, avg=5715.52, stdev=3440.88 00:28:18.446 clat percentiles (msec): 00:28:18.446 | 1.00th=[ 89], 5.00th=[ 2039], 10.00th=[ 2072], 20.00th=[ 2140], 00:28:18.446 | 30.00th=[ 2198], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6477], 00:28:18.446 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:28:18.446 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:18.446 | 99.99th=[10671] 00:28:18.446 lat (msec) : 100=1.37%, 2000=2.74%, >=2000=95.89% 00:28:18.446 cpu : usr=0.01%, sys=0.61%, ctx=64, 
majf=0, minf=18689 00:28:18.446 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:28:18.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:18.446 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.446 job2: (groupid=0, jobs=1): err= 0: pid=3458230: Wed Nov 27 05:45:13 2024 00:28:18.446 read: IOPS=27, BW=27.2MiB/s (28.5MB/s)(292MiB/10744msec) 00:28:18.446 slat (usec): min=51, max=2061.3k, avg=36523.31, stdev=176559.91 00:28:18.446 clat (msec): min=77, max=6467, avg=3097.79, stdev=1359.56 00:28:18.446 lat (msec): min=1152, max=6541, avg=3134.31, stdev=1364.03 00:28:18.446 clat percentiles (msec): 00:28:18.446 | 1.00th=[ 1150], 5.00th=[ 1351], 10.00th=[ 1552], 20.00th=[ 2265], 00:28:18.446 | 30.00th=[ 2635], 40.00th=[ 2769], 50.00th=[ 2903], 60.00th=[ 2970], 00:28:18.446 | 70.00th=[ 3037], 80.00th=[ 3473], 90.00th=[ 6208], 95.00th=[ 6409], 00:28:18.446 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:28:18.446 | 99.99th=[ 6477] 00:28:18.446 bw ( KiB/s): min= 1434, max=65405, per=1.09%, avg=37427.44, stdev=20216.68, samples=9 00:28:18.446 iops : min= 1, max= 63, avg=36.22, stdev=19.80, samples=9 00:28:18.446 lat (msec) : 100=0.34%, 2000=16.10%, >=2000=83.56% 00:28:18.446 cpu : usr=0.03%, sys=1.03%, ctx=922, majf=0, minf=32769 00:28:18.446 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4% 00:28:18.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.446 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:18.446 issued rwts: total=292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.446 job2: (groupid=0, jobs=1): err= 0: pid=3458231: Wed Nov 27 05:45:13 2024 
00:28:18.446 read: IOPS=7, BW=7316KiB/s (7491kB/s)(76.0MiB/10638msec) 00:28:18.446 slat (usec): min=522, max=2067.7k, avg=138933.68, stdev=498498.73 00:28:18.446 clat (msec): min=77, max=10627, avg=6875.53, stdev=3242.62 00:28:18.446 lat (msec): min=2102, max=10637, avg=7014.46, stdev=3172.95 00:28:18.446 clat percentiles (msec): 00:28:18.446 | 1.00th=[ 79], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4245], 00:28:18.446 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:28:18.446 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:28:18.446 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:18.446 | 99.99th=[10671] 00:28:18.446 lat (msec) : 100=1.32%, >=2000=98.68% 00:28:18.446 cpu : usr=0.02%, sys=0.47%, ctx=68, majf=0, minf=19457 00:28:18.446 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:28:18.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:18.446 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.446 job2: (groupid=0, jobs=1): err= 0: pid=3458232: Wed Nov 27 05:45:13 2024 00:28:18.446 read: IOPS=19, BW=19.5MiB/s (20.5MB/s)(196MiB/10037msec) 00:28:18.446 slat (usec): min=1388, max=2118.4k, avg=51021.25, stdev=202097.79 00:28:18.446 clat (msec): min=35, max=8130, avg=2957.58, stdev=1864.23 00:28:18.446 lat (msec): min=40, max=8151, avg=3008.60, stdev=1905.65 00:28:18.446 clat percentiles (msec): 00:28:18.446 | 1.00th=[ 41], 5.00th=[ 321], 10.00th=[ 550], 20.00th=[ 1083], 00:28:18.446 | 30.00th=[ 1754], 40.00th=[ 2500], 50.00th=[ 3004], 60.00th=[ 3708], 00:28:18.446 | 70.00th=[ 3910], 80.00th=[ 4044], 90.00th=[ 4111], 95.00th=[ 7819], 00:28:18.446 | 99.00th=[ 8087], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:28:18.446 | 99.99th=[ 8154] 00:28:18.446 
bw ( KiB/s): min=12288, max=40960, per=0.81%, avg=27930.80, stdev=10549.84, samples=5 00:28:18.446 iops : min= 12, max= 40, avg=27.20, stdev=10.26, samples=5 00:28:18.446 lat (msec) : 50=1.02%, 250=3.06%, 500=4.08%, 750=5.61%, 1000=4.59% 00:28:18.446 lat (msec) : 2000=13.78%, >=2000=67.86% 00:28:18.446 cpu : usr=0.00%, sys=1.04%, ctx=899, majf=0, minf=32769 00:28:18.446 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.2%, 32=16.3%, >=64=67.9% 00:28:18.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.446 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:28:18.446 issued rwts: total=196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.446 job2: (groupid=0, jobs=1): err= 0: pid=3458233: Wed Nov 27 05:45:13 2024 00:28:18.446 read: IOPS=2, BW=2992KiB/s (3064kB/s)(31.0MiB/10609msec) 00:28:18.446 slat (usec): min=583, max=2086.9k, avg=339589.50, stdev=745242.98 00:28:18.446 clat (msec): min=80, max=10599, avg=5991.77, stdev=3287.89 00:28:18.446 lat (msec): min=2105, max=10608, avg=6331.36, stdev=3199.49 00:28:18.446 clat percentiles (msec): 00:28:18.446 | 1.00th=[ 81], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 2165], 00:28:18.446 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:28:18.446 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10537], 95.00th=[10537], 00:28:18.446 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:28:18.446 | 99.99th=[10537] 00:28:18.446 lat (msec) : 100=3.23%, >=2000=96.77% 00:28:18.446 cpu : usr=0.00%, sys=0.26%, ctx=71, majf=0, minf=7937 00:28:18.446 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:28:18.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:18.446 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:28:18.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.447 job2: (groupid=0, jobs=1): err= 0: pid=3458234: Wed Nov 27 05:45:13 2024 00:28:18.447 read: IOPS=30, BW=30.7MiB/s (32.2MB/s)(309MiB/10075msec) 00:28:18.447 slat (usec): min=618, max=2129.5k, avg=32389.30, stdev=152169.75 00:28:18.447 clat (msec): min=64, max=7537, avg=2976.77, stdev=2218.52 00:28:18.447 lat (msec): min=75, max=7561, avg=3009.16, stdev=2229.82 00:28:18.447 clat percentiles (msec): 00:28:18.447 | 1.00th=[ 132], 5.00th=[ 338], 10.00th=[ 659], 20.00th=[ 1217], 00:28:18.447 | 30.00th=[ 1636], 40.00th=[ 1989], 50.00th=[ 2123], 60.00th=[ 2903], 00:28:18.447 | 70.00th=[ 3306], 80.00th=[ 3842], 90.00th=[ 7215], 95.00th=[ 7349], 00:28:18.447 | 99.00th=[ 7483], 99.50th=[ 7550], 99.90th=[ 7550], 99.95th=[ 7550], 00:28:18.447 | 99.99th=[ 7550] 00:28:18.447 bw ( KiB/s): min=16384, max=81920, per=1.33%, avg=45963.75, stdev=23683.77, samples=8 00:28:18.447 iops : min= 16, max= 80, avg=44.75, stdev=23.13, samples=8 00:28:18.447 lat (msec) : 100=0.65%, 250=2.27%, 500=5.18%, 750=2.59%, 1000=4.21% 00:28:18.447 lat (msec) : 2000=25.57%, >=2000=59.55% 00:28:18.447 cpu : usr=0.01%, sys=1.33%, ctx=1087, majf=0, minf=32769 00:28:18.447 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.6% 00:28:18.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.447 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:28:18.447 issued rwts: total=309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.447 job2: (groupid=0, jobs=1): err= 0: pid=3458235: Wed Nov 27 05:45:13 2024 00:28:18.447 read: IOPS=65, BW=65.2MiB/s (68.4MB/s)(657MiB/10073msec) 00:28:18.447 slat (usec): min=67, max=1903.6k, avg=15214.92, stdev=76934.53 00:28:18.447 clat (msec): min=71, max=5307, avg=1877.57, stdev=1470.37 00:28:18.447 lat (msec): min=74, max=5308, avg=1892.78, stdev=1476.49 
00:28:18.447 clat percentiles (msec): 00:28:18.447 | 1.00th=[ 92], 5.00th=[ 550], 10.00th=[ 584], 20.00th=[ 625], 00:28:18.447 | 30.00th=[ 651], 40.00th=[ 944], 50.00th=[ 1469], 60.00th=[ 1989], 00:28:18.447 | 70.00th=[ 2165], 80.00th=[ 3071], 90.00th=[ 4732], 95.00th=[ 5000], 00:28:18.447 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336], 00:28:18.447 | 99.99th=[ 5336] 00:28:18.447 bw ( KiB/s): min=20480, max=233472, per=1.96%, avg=67609.00, stdev=51983.40, samples=16 00:28:18.447 iops : min= 20, max= 228, avg=66.00, stdev=50.77, samples=16 00:28:18.447 lat (msec) : 100=1.07%, 250=1.07%, 500=1.98%, 750=33.64%, 1000=5.18% 00:28:18.447 lat (msec) : 2000=17.66%, >=2000=39.42% 00:28:18.447 cpu : usr=0.04%, sys=1.91%, ctx=1258, majf=0, minf=32769 00:28:18.447 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:28:18.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.447 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.447 issued rwts: total=657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.447 job2: (groupid=0, jobs=1): err= 0: pid=3458236: Wed Nov 27 05:45:13 2024 00:28:18.447 read: IOPS=29, BW=30.0MiB/s (31.5MB/s)(302MiB/10068msec) 00:28:18.447 slat (usec): min=457, max=2070.9k, avg=33116.90, stdev=163989.44 00:28:18.447 clat (msec): min=65, max=6934, avg=2272.82, stdev=1665.28 00:28:18.447 lat (msec): min=73, max=7015, avg=2305.94, stdev=1689.32 00:28:18.447 clat percentiles (msec): 00:28:18.447 | 1.00th=[ 94], 5.00th=[ 192], 10.00th=[ 334], 20.00th=[ 550], 00:28:18.447 | 30.00th=[ 919], 40.00th=[ 1636], 50.00th=[ 2232], 60.00th=[ 3071], 00:28:18.447 | 70.00th=[ 3171], 80.00th=[ 3339], 90.00th=[ 3540], 95.00th=[ 6745], 00:28:18.447 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:28:18.447 | 99.99th=[ 6946] 00:28:18.447 bw ( KiB/s): min=16384, max=115674, 
per=1.45%, avg=49878.00, stdev=35513.41, samples=7 00:28:18.447 iops : min= 16, max= 112, avg=48.57, stdev=34.38, samples=7 00:28:18.447 lat (msec) : 100=1.32%, 250=5.63%, 500=11.59%, 750=6.62%, 1000=6.95% 00:28:18.447 lat (msec) : 2000=12.58%, >=2000=55.30% 00:28:18.447 cpu : usr=0.02%, sys=1.09%, ctx=971, majf=0, minf=32769 00:28:18.447 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.1% 00:28:18.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.447 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:18.447 issued rwts: total=302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.447 job3: (groupid=0, jobs=1): err= 0: pid=3458237: Wed Nov 27 05:45:13 2024 00:28:18.447 read: IOPS=16, BW=16.5MiB/s (17.3MB/s)(167MiB/10102msec) 00:28:18.447 slat (usec): min=1103, max=2142.2k, avg=59940.92, stdev=313379.35 00:28:18.447 clat (msec): min=90, max=9804, avg=2840.11, stdev=3830.04 00:28:18.447 lat (msec): min=165, max=9809, avg=2900.05, stdev=3864.29 00:28:18.447 clat percentiles (msec): 00:28:18.447 | 1.00th=[ 165], 5.00th=[ 213], 10.00th=[ 264], 20.00th=[ 363], 00:28:18.447 | 30.00th=[ 468], 40.00th=[ 575], 50.00th=[ 684], 60.00th=[ 1045], 00:28:18.447 | 70.00th=[ 1368], 80.00th=[ 9597], 90.00th=[ 9731], 95.00th=[ 9731], 00:28:18.447 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:28:18.447 | 99.99th=[ 9866] 00:28:18.447 bw ( KiB/s): min=81920, max=81920, per=2.38%, avg=81920.00, stdev= 0.00, samples=1 00:28:18.447 iops : min= 80, max= 80, avg=80.00, stdev= 0.00, samples=1 00:28:18.447 lat (msec) : 100=0.60%, 250=8.38%, 500=25.15%, 750=18.56%, 1000=5.39% 00:28:18.447 lat (msec) : 2000=16.17%, >=2000=25.75% 00:28:18.447 cpu : usr=0.00%, sys=0.95%, ctx=444, majf=0, minf=32769 00:28:18.447 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3% 00:28:18.447 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.447 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:28:18.447 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.447 job3: (groupid=0, jobs=1): err= 0: pid=3458238: Wed Nov 27 05:45:13 2024 00:28:18.447 read: IOPS=55, BW=55.9MiB/s (58.6MB/s)(561MiB/10041msec) 00:28:18.447 slat (usec): min=56, max=2098.3k, avg=17827.51, stdev=143909.89 00:28:18.447 clat (msec): min=35, max=9300, avg=2189.21, stdev=2777.72 00:28:18.447 lat (msec): min=42, max=9334, avg=2207.04, stdev=2786.24 00:28:18.447 clat percentiles (msec): 00:28:18.447 | 1.00th=[ 72], 5.00th=[ 234], 10.00th=[ 477], 20.00th=[ 693], 00:28:18.447 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 709], 60.00th=[ 726], 00:28:18.447 | 70.00th=[ 835], 80.00th=[ 6812], 90.00th=[ 7349], 95.00th=[ 7416], 00:28:18.447 | 99.00th=[ 7550], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:28:18.447 | 99.99th=[ 9329] 00:28:18.447 bw ( KiB/s): min= 2048, max=190464, per=2.34%, avg=80786.55, stdev=77032.94, samples=11 00:28:18.447 iops : min= 2, max= 186, avg=78.82, stdev=75.22, samples=11 00:28:18.447 lat (msec) : 50=0.53%, 100=1.07%, 250=3.74%, 500=5.53%, 750=52.94% 00:28:18.447 lat (msec) : 1000=8.91%, 2000=3.74%, >=2000=23.53% 00:28:18.447 cpu : usr=0.05%, sys=1.88%, ctx=825, majf=0, minf=32770 00:28:18.447 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.8% 00:28:18.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.447 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.447 issued rwts: total=561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.448 job3: (groupid=0, jobs=1): err= 0: pid=3458239: Wed Nov 27 05:45:13 2024 00:28:18.448 read: IOPS=121, BW=121MiB/s (127MB/s)(1221MiB/10081msec) 00:28:18.448 
slat (usec): min=40, max=1896.1k, avg=8191.71, stdev=54855.92 00:28:18.448 clat (msec): min=67, max=2750, avg=852.46, stdev=365.11 00:28:18.448 lat (msec): min=83, max=2754, avg=860.65, stdev=368.72 00:28:18.448 clat percentiles (msec): 00:28:18.448 | 1.00th=[ 155], 5.00th=[ 676], 10.00th=[ 684], 20.00th=[ 684], 00:28:18.448 | 30.00th=[ 693], 40.00th=[ 701], 50.00th=[ 726], 60.00th=[ 810], 00:28:18.448 | 70.00th=[ 869], 80.00th=[ 927], 90.00th=[ 1217], 95.00th=[ 1435], 00:28:18.448 | 99.00th=[ 2735], 99.50th=[ 2735], 99.90th=[ 2735], 99.95th=[ 2735], 00:28:18.448 | 99.99th=[ 2735] 00:28:18.448 bw ( KiB/s): min=51200, max=188416, per=4.33%, avg=149230.93, stdev=44815.09, samples=15 00:28:18.448 iops : min= 50, max= 184, avg=145.73, stdev=43.76, samples=15 00:28:18.448 lat (msec) : 100=0.49%, 250=1.47%, 500=0.57%, 750=49.80%, 1000=36.61% 00:28:18.448 lat (msec) : 2000=8.60%, >=2000=2.46% 00:28:18.448 cpu : usr=0.16%, sys=2.67%, ctx=1253, majf=0, minf=32769 00:28:18.448 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:28:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.448 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.448 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.448 job3: (groupid=0, jobs=1): err= 0: pid=3458240: Wed Nov 27 05:45:13 2024 00:28:18.448 read: IOPS=118, BW=119MiB/s (125MB/s)(1199MiB/10096msec) 00:28:18.448 slat (usec): min=45, max=2062.5k, avg=8337.43, stdev=89126.24 00:28:18.448 clat (msec): min=94, max=7751, avg=922.07, stdev=1440.15 00:28:18.448 lat (msec): min=135, max=9544, avg=930.41, stdev=1455.46 00:28:18.448 clat percentiles (msec): 00:28:18.448 | 1.00th=[ 136], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 224], 00:28:18.448 | 30.00th=[ 255], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:28:18.448 | 70.00th=[ 279], 80.00th=[ 1301], 90.00th=[ 
3071], 95.00th=[ 4732], 00:28:18.448 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 7684], 99.95th=[ 7752], 00:28:18.448 | 99.99th=[ 7752] 00:28:18.448 bw ( KiB/s): min= 8192, max=522240, per=5.79%, avg=199576.82, stdev=225308.27, samples=11 00:28:18.448 iops : min= 8, max= 510, avg=194.82, stdev=220.08, samples=11 00:28:18.448 lat (msec) : 100=0.08%, 250=25.52%, 500=50.79%, 750=0.83%, 1000=1.08% 00:28:18.448 lat (msec) : 2000=2.75%, >=2000=18.93% 00:28:18.448 cpu : usr=0.00%, sys=1.61%, ctx=2210, majf=0, minf=32769 00:28:18.448 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:28:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.448 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.448 issued rwts: total=1199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.448 job3: (groupid=0, jobs=1): err= 0: pid=3458241: Wed Nov 27 05:45:13 2024 00:28:18.448 read: IOPS=60, BW=61.0MiB/s (64.0MB/s)(611MiB/10017msec) 00:28:18.448 slat (usec): min=48, max=2081.8k, avg=16362.86, stdev=120585.88 00:28:18.448 clat (msec): min=14, max=6759, avg=1941.75, stdev=1891.34 00:28:18.448 lat (msec): min=16, max=6760, avg=1958.11, stdev=1901.23 00:28:18.448 clat percentiles (msec): 00:28:18.448 | 1.00th=[ 22], 5.00th=[ 48], 10.00th=[ 91], 20.00th=[ 709], 00:28:18.448 | 30.00th=[ 718], 40.00th=[ 743], 50.00th=[ 827], 60.00th=[ 1083], 00:28:18.448 | 70.00th=[ 3071], 80.00th=[ 4463], 90.00th=[ 4665], 95.00th=[ 6342], 00:28:18.448 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:28:18.448 | 99.99th=[ 6745] 00:28:18.448 bw ( KiB/s): min= 4096, max=176128, per=2.27%, avg=78438.40, stdev=55854.07, samples=10 00:28:18.448 iops : min= 4, max= 172, avg=76.60, stdev=54.54, samples=10 00:28:18.448 lat (msec) : 20=0.82%, 50=4.42%, 100=5.24%, 250=3.93%, 500=2.13% 00:28:18.448 lat (msec) : 750=27.33%, 1000=12.11%, 
2000=7.53%, >=2000=36.50% 00:28:18.448 cpu : usr=0.04%, sys=1.66%, ctx=924, majf=0, minf=32769 00:28:18.448 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7% 00:28:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.448 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.448 issued rwts: total=611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.448 job3: (groupid=0, jobs=1): err= 0: pid=3458242: Wed Nov 27 05:45:13 2024 00:28:18.448 read: IOPS=58, BW=58.4MiB/s (61.2MB/s)(587MiB/10051msec) 00:28:18.448 slat (usec): min=419, max=2086.5k, avg=17054.49, stdev=142357.83 00:28:18.448 clat (msec): min=35, max=6923, avg=963.93, stdev=1110.64 00:28:18.448 lat (msec): min=57, max=6928, avg=980.98, stdev=1137.07 00:28:18.448 clat percentiles (msec): 00:28:18.448 | 1.00th=[ 71], 5.00th=[ 321], 10.00th=[ 600], 20.00th=[ 617], 00:28:18.448 | 30.00th=[ 625], 40.00th=[ 625], 50.00th=[ 634], 60.00th=[ 693], 00:28:18.448 | 70.00th=[ 944], 80.00th=[ 1099], 90.00th=[ 1234], 95.00th=[ 1334], 00:28:18.448 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:28:18.448 | 99.99th=[ 6946] 00:28:18.448 bw ( KiB/s): min=43008, max=208896, per=3.87%, avg=133570.14, stdev=66328.06, samples=7 00:28:18.448 iops : min= 42, max= 204, avg=130.43, stdev=64.78, samples=7 00:28:18.448 lat (msec) : 50=0.17%, 100=1.36%, 250=2.39%, 500=3.92%, 750=54.17% 00:28:18.448 lat (msec) : 1000=11.93%, 2000=21.98%, >=2000=4.09% 00:28:18.448 cpu : usr=0.03%, sys=1.69%, ctx=1414, majf=0, minf=32769 00:28:18.448 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.3% 00:28:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.448 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.448 issued rwts: total=587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.448 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:28:18.448 job3: (groupid=0, jobs=1): err= 0: pid=3458243: Wed Nov 27 05:45:13 2024 00:28:18.448 read: IOPS=119, BW=119MiB/s (125MB/s)(1197MiB/10017msec) 00:28:18.448 slat (usec): min=49, max=2064.9k, avg=8350.31, stdev=84570.62 00:28:18.448 clat (msec): min=15, max=5980, avg=1012.81, stdev=1597.84 00:28:18.448 lat (msec): min=19, max=5984, avg=1021.16, stdev=1604.05 00:28:18.448 clat percentiles (msec): 00:28:18.448 | 1.00th=[ 58], 5.00th=[ 275], 10.00th=[ 284], 20.00th=[ 288], 00:28:18.448 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 326], 60.00th=[ 617], 00:28:18.448 | 70.00th=[ 718], 80.00th=[ 776], 90.00th=[ 4933], 95.00th=[ 5738], 00:28:18.448 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007], 00:28:18.448 | 99.99th=[ 6007] 00:28:18.448 bw ( KiB/s): min= 6144, max=450560, per=4.93%, avg=169953.00, stdev=165740.97, samples=12 00:28:18.448 iops : min= 6, max= 440, avg=165.92, stdev=161.85, samples=12 00:28:18.448 lat (msec) : 20=0.17%, 50=0.75%, 100=0.75%, 250=1.34%, 500=52.13% 00:28:18.448 lat (msec) : 750=21.72%, 1000=10.36%, 2000=1.84%, >=2000=10.94% 00:28:18.448 cpu : usr=0.03%, sys=1.79%, ctx=2309, majf=0, minf=32769 00:28:18.448 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:28:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.448 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.448 issued rwts: total=1197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.448 job3: (groupid=0, jobs=1): err= 0: pid=3458244: Wed Nov 27 05:45:13 2024 00:28:18.448 read: IOPS=56, BW=56.7MiB/s (59.5MB/s)(570MiB/10053msec) 00:28:18.448 slat (usec): min=57, max=1997.6k, avg=17567.08, stdev=107890.34 00:28:18.448 clat (msec): min=35, max=5414, avg=1621.15, stdev=1264.01 00:28:18.449 lat (msec): min=74, max=5420, avg=1638.72, stdev=1271.85 
00:28:18.449 clat percentiles (msec): 00:28:18.449 | 1.00th=[ 209], 5.00th=[ 785], 10.00th=[ 844], 20.00th=[ 894], 00:28:18.449 | 30.00th=[ 1003], 40.00th=[ 1083], 50.00th=[ 1200], 60.00th=[ 1234], 00:28:18.449 | 70.00th=[ 1569], 80.00th=[ 1821], 90.00th=[ 3608], 95.00th=[ 5269], 00:28:18.449 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:28:18.449 | 99.99th=[ 5403] 00:28:18.449 bw ( KiB/s): min=18432, max=163840, per=2.63%, avg=90529.10, stdev=61656.72, samples=10 00:28:18.449 iops : min= 18, max= 160, avg=88.40, stdev=60.22, samples=10 00:28:18.449 lat (msec) : 50=0.18%, 100=0.35%, 250=0.70%, 500=0.70%, 750=1.23% 00:28:18.449 lat (msec) : 1000=25.96%, 2000=57.19%, >=2000=13.68% 00:28:18.449 cpu : usr=0.01%, sys=1.49%, ctx=939, majf=0, minf=32769 00:28:18.449 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9% 00:28:18.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.449 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.449 issued rwts: total=570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.449 job3: (groupid=0, jobs=1): err= 0: pid=3458245: Wed Nov 27 05:45:13 2024 00:28:18.449 read: IOPS=65, BW=65.3MiB/s (68.5MB/s)(657MiB/10060msec) 00:28:18.449 slat (usec): min=43, max=2081.3k, avg=15216.01, stdev=137489.44 00:28:18.449 clat (msec): min=59, max=7505, avg=1872.88, stdev=2475.09 00:28:18.449 lat (msec): min=59, max=7507, avg=1888.10, stdev=2484.14 00:28:18.449 clat percentiles (msec): 00:28:18.449 | 1.00th=[ 65], 5.00th=[ 194], 10.00th=[ 334], 20.00th=[ 542], 00:28:18.449 | 30.00th=[ 584], 40.00th=[ 600], 50.00th=[ 625], 60.00th=[ 642], 00:28:18.449 | 70.00th=[ 726], 80.00th=[ 3205], 90.00th=[ 7215], 95.00th=[ 7416], 00:28:18.449 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:28:18.449 | 99.99th=[ 7483] 00:28:18.449 bw ( KiB/s): min= 4096, max=225280, 
per=2.69%, avg=92842.67, stdev=84887.07, samples=9 00:28:18.449 iops : min= 4, max= 220, avg=90.67, stdev=82.90, samples=9 00:28:18.449 lat (msec) : 100=2.28%, 250=4.72%, 500=9.13%, 750=54.19%, 1000=3.96% 00:28:18.449 lat (msec) : 2000=0.91%, >=2000=24.81% 00:28:18.449 cpu : usr=0.02%, sys=1.54%, ctx=707, majf=0, minf=32769 00:28:18.449 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:28:18.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.449 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.449 issued rwts: total=657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.449 job3: (groupid=0, jobs=1): err= 0: pid=3458246: Wed Nov 27 05:45:13 2024 00:28:18.449 read: IOPS=72, BW=72.5MiB/s (76.0MB/s)(729MiB/10054msec) 00:28:18.449 slat (usec): min=53, max=2079.1k, avg=13713.70, stdev=129219.92 00:28:18.449 clat (msec): min=51, max=6917, avg=987.82, stdev=1457.12 00:28:18.449 lat (msec): min=60, max=6923, avg=1001.53, stdev=1473.02 00:28:18.449 clat percentiles (msec): 00:28:18.449 | 1.00th=[ 102], 5.00th=[ 347], 10.00th=[ 514], 20.00th=[ 523], 00:28:18.449 | 30.00th=[ 527], 40.00th=[ 535], 50.00th=[ 542], 60.00th=[ 634], 00:28:18.449 | 70.00th=[ 760], 80.00th=[ 818], 90.00th=[ 927], 95.00th=[ 6812], 00:28:18.449 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:28:18.449 | 99.99th=[ 6946] 00:28:18.449 bw ( KiB/s): min=65405, max=251904, per=5.10%, avg=176037.43, stdev=75792.57, samples=7 00:28:18.449 iops : min= 63, max= 246, avg=171.57, stdev=74.12, samples=7 00:28:18.449 lat (msec) : 100=0.96%, 250=2.61%, 500=4.53%, 750=58.85%, 1000=26.20% 00:28:18.449 lat (msec) : >=2000=6.86% 00:28:18.449 cpu : usr=0.03%, sys=1.97%, ctx=1417, majf=0, minf=32769 00:28:18.449 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:28:18.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.449 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.449 issued rwts: total=729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.449 job3: (groupid=0, jobs=1): err= 0: pid=3458247: Wed Nov 27 05:45:13 2024 00:28:18.449 read: IOPS=60, BW=60.2MiB/s (63.1MB/s)(609MiB/10114msec) 00:28:18.449 slat (usec): min=47, max=2043.9k, avg=16450.73, stdev=141229.75 00:28:18.449 clat (msec): min=91, max=4853, avg=1500.17, stdev=1485.93 00:28:18.449 lat (msec): min=198, max=4861, avg=1516.62, stdev=1493.00 00:28:18.449 clat percentiles (msec): 00:28:18.449 | 1.00th=[ 426], 5.00th=[ 426], 10.00th=[ 430], 20.00th=[ 443], 00:28:18.449 | 30.00th=[ 464], 40.00th=[ 531], 50.00th=[ 634], 60.00th=[ 735], 00:28:18.449 | 70.00th=[ 1368], 80.00th=[ 3339], 90.00th=[ 3842], 95.00th=[ 4732], 00:28:18.449 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:28:18.449 | 99.99th=[ 4866] 00:28:18.449 bw ( KiB/s): min=14336, max=313344, per=3.57%, avg=123136.00, stdev=113587.87, samples=8 00:28:18.449 iops : min= 14, max= 306, avg=120.25, stdev=110.93, samples=8 00:28:18.449 lat (msec) : 100=0.16%, 250=0.16%, 500=37.11%, 750=23.15%, 1000=5.58% 00:28:18.449 lat (msec) : 2000=4.76%, >=2000=29.06% 00:28:18.449 cpu : usr=0.03%, sys=1.88%, ctx=758, majf=0, minf=32769 00:28:18.449 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.7% 00:28:18.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.449 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.449 issued rwts: total=609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.449 job3: (groupid=0, jobs=1): err= 0: pid=3458248: Wed Nov 27 05:45:13 2024 00:28:18.449 read: IOPS=87, BW=87.7MiB/s (91.9MB/s)(880MiB/10037msec) 00:28:18.449 slat (usec): min=63, 
max=2152.0k, avg=11358.70, stdev=121527.98 00:28:18.449 clat (msec): min=33, max=7077, avg=1388.21, stdev=2187.88 00:28:18.449 lat (msec): min=36, max=7080, avg=1399.57, stdev=2196.33 00:28:18.449 clat percentiles (msec): 00:28:18.449 | 1.00th=[ 53], 5.00th=[ 136], 10.00th=[ 230], 20.00th=[ 292], 00:28:18.449 | 30.00th=[ 292], 40.00th=[ 321], 50.00th=[ 418], 60.00th=[ 709], 00:28:18.449 | 70.00th=[ 726], 80.00th=[ 776], 90.00th=[ 6946], 95.00th=[ 7013], 00:28:18.449 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:28:18.449 | 99.99th=[ 7080] 00:28:18.449 bw ( KiB/s): min=26624, max=438272, per=4.97%, avg=171349.33, stdev=153743.29, samples=9 00:28:18.449 iops : min= 26, max= 428, avg=167.33, stdev=150.14, samples=9 00:28:18.449 lat (msec) : 50=0.68%, 100=2.50%, 250=7.50%, 500=43.86%, 750=20.34% 00:28:18.449 lat (msec) : 1000=7.61%, >=2000=17.50% 00:28:18.449 cpu : usr=0.04%, sys=2.32%, ctx=719, majf=0, minf=32769 00:28:18.449 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:28:18.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.449 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.449 job3: (groupid=0, jobs=1): err= 0: pid=3458249: Wed Nov 27 05:45:13 2024 00:28:18.449 read: IOPS=20, BW=20.5MiB/s (21.5MB/s)(206MiB/10040msec) 00:28:18.449 slat (usec): min=42, max=2092.8k, avg=48547.02, stdev=278275.58 00:28:18.449 clat (msec): min=38, max=9269, avg=1092.31, stdev=1606.56 00:28:18.449 lat (msec): min=40, max=9319, avg=1140.86, stdev=1705.66 00:28:18.449 clat percentiles (msec): 00:28:18.449 | 1.00th=[ 42], 5.00th=[ 84], 10.00th=[ 142], 20.00th=[ 393], 00:28:18.449 | 30.00th=[ 584], 40.00th=[ 751], 50.00th=[ 885], 60.00th=[ 919], 00:28:18.449 | 70.00th=[ 969], 80.00th=[ 1011], 90.00th=[ 1036], 95.00th=[ 
3071], 00:28:18.449 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:28:18.449 | 99.99th=[ 9329] 00:28:18.450 bw ( KiB/s): min=51200, max=110151, per=2.34%, avg=80675.50, stdev=41684.65, samples=2 00:28:18.450 iops : min= 50, max= 107, avg=78.50, stdev=40.31, samples=2 00:28:18.450 lat (msec) : 50=2.91%, 100=3.88%, 250=7.77%, 500=10.19%, 750=15.53% 00:28:18.450 lat (msec) : 1000=33.98%, 2000=18.45%, >=2000=7.28% 00:28:18.450 cpu : usr=0.00%, sys=0.90%, ctx=577, majf=0, minf=32769 00:28:18.450 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.8%, 32=15.5%, >=64=69.4% 00:28:18.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.450 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:28:18.450 issued rwts: total=206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.450 job4: (groupid=0, jobs=1): err= 0: pid=3458250: Wed Nov 27 05:45:13 2024 00:28:18.450 read: IOPS=50, BW=50.5MiB/s (53.0MB/s)(508MiB/10058msec) 00:28:18.450 slat (usec): min=34, max=2088.7k, avg=19691.59, stdev=156098.36 00:28:18.450 clat (msec): min=51, max=7602, avg=1545.12, stdev=1486.67 00:28:18.450 lat (msec): min=80, max=9536, avg=1564.81, stdev=1508.14 00:28:18.450 clat percentiles (msec): 00:28:18.450 | 1.00th=[ 89], 5.00th=[ 146], 10.00th=[ 245], 20.00th=[ 405], 00:28:18.450 | 30.00th=[ 472], 40.00th=[ 785], 50.00th=[ 877], 60.00th=[ 953], 00:28:18.450 | 70.00th=[ 1167], 80.00th=[ 3473], 90.00th=[ 3675], 95.00th=[ 3775], 00:28:18.450 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 7617], 99.95th=[ 7617], 00:28:18.450 | 99.99th=[ 7617] 00:28:18.450 bw ( KiB/s): min=49152, max=163840, per=2.87%, avg=99123.20, stdev=49462.49, samples=5 00:28:18.450 iops : min= 48, max= 160, avg=96.80, stdev=48.30, samples=5 00:28:18.450 lat (msec) : 100=2.95%, 250=8.27%, 500=19.29%, 750=4.92%, 1000=31.10% 00:28:18.450 lat (msec) : 2000=4.13%, >=2000=29.33% 00:28:18.450 
cpu : usr=0.07%, sys=1.51%, ctx=585, majf=0, minf=32769 00:28:18.450 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:28:18.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.450 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.450 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.450 job4: (groupid=0, jobs=1): err= 0: pid=3458251: Wed Nov 27 05:45:13 2024 00:28:18.450 read: IOPS=19, BW=19.4MiB/s (20.4MB/s)(210MiB/10797msec) 00:28:18.450 slat (usec): min=655, max=2098.5k, avg=50950.25, stdev=275943.55 00:28:18.450 clat (msec): min=95, max=9974, avg=6176.01, stdev=3659.05 00:28:18.450 lat (msec): min=1413, max=9988, avg=6226.96, stdev=3639.94 00:28:18.450 clat percentiles (msec): 00:28:18.450 | 1.00th=[ 1418], 5.00th=[ 1502], 10.00th=[ 1586], 20.00th=[ 1636], 00:28:18.450 | 30.00th=[ 1703], 40.00th=[ 5738], 50.00th=[ 8658], 60.00th=[ 8926], 00:28:18.450 | 70.00th=[ 9194], 80.00th=[ 9463], 90.00th=[ 9731], 95.00th=[ 9866], 00:28:18.450 | 99.00th=[ 9866], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:28:18.450 | 99.99th=[10000] 00:28:18.450 bw ( KiB/s): min= 4096, max=86016, per=0.81%, avg=27989.33, stdev=33543.88, samples=6 00:28:18.450 iops : min= 4, max= 84, avg=27.33, stdev=32.76, samples=6 00:28:18.450 lat (msec) : 100=0.48%, 2000=35.24%, >=2000=64.29% 00:28:18.450 cpu : usr=0.00%, sys=1.44%, ctx=444, majf=0, minf=32769 00:28:18.450 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.6%, 32=15.2%, >=64=70.0% 00:28:18.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.450 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:28:18.450 issued rwts: total=210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.450 job4: (groupid=0, jobs=1): err= 0: 
pid=3458252: Wed Nov 27 05:45:13 2024 00:28:18.450 read: IOPS=70, BW=70.6MiB/s (74.1MB/s)(759MiB/10747msec) 00:28:18.450 slat (usec): min=40, max=2031.2k, avg=14042.98, stdev=103931.92 00:28:18.450 clat (msec): min=83, max=5392, avg=1676.38, stdev=1606.56 00:28:18.450 lat (msec): min=283, max=5394, avg=1690.43, stdev=1610.22 00:28:18.450 clat percentiles (msec): 00:28:18.450 | 1.00th=[ 284], 5.00th=[ 284], 10.00th=[ 288], 20.00th=[ 288], 00:28:18.450 | 30.00th=[ 321], 40.00th=[ 481], 50.00th=[ 1418], 60.00th=[ 1838], 00:28:18.450 | 70.00th=[ 2022], 80.00th=[ 2123], 90.00th=[ 5201], 95.00th=[ 5336], 00:28:18.450 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:28:18.450 | 99.99th=[ 5403] 00:28:18.450 bw ( KiB/s): min= 1408, max=454656, per=3.13%, avg=107808.00, stdev=137917.32, samples=12 00:28:18.450 iops : min= 1, max= 444, avg=105.25, stdev=134.71, samples=12 00:28:18.450 lat (msec) : 100=0.13%, 500=40.32%, 750=1.19%, 1000=2.24%, 2000=23.32% 00:28:18.450 lat (msec) : >=2000=32.81% 00:28:18.450 cpu : usr=0.03%, sys=1.57%, ctx=1719, majf=0, minf=32769 00:28:18.450 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:28:18.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.450 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.450 issued rwts: total=759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.450 job4: (groupid=0, jobs=1): err= 0: pid=3458253: Wed Nov 27 05:45:13 2024 00:28:18.450 read: IOPS=13, BW=13.9MiB/s (14.6MB/s)(149MiB/10715msec) 00:28:18.450 slat (usec): min=1161, max=2155.0k, avg=71346.30, stdev=327029.80 00:28:18.450 clat (msec): min=83, max=10440, avg=8209.81, stdev=2508.70 00:28:18.450 lat (msec): min=1962, max=10445, avg=8281.16, stdev=2424.25 00:28:18.450 clat percentiles (msec): 00:28:18.450 | 1.00th=[ 1955], 5.00th=[ 2056], 10.00th=[ 2165], 20.00th=[ 8490], 
00:28:18.450 | 30.00th=[ 8658], 40.00th=[ 8926], 50.00th=[ 9060], 60.00th=[ 9329], 00:28:18.450 | 70.00th=[ 9463], 80.00th=[ 9731], 90.00th=[10000], 95.00th=[10268], 00:28:18.450 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:28:18.450 | 99.99th=[10402] 00:28:18.450 bw ( KiB/s): min= 1464, max=20480, per=0.26%, avg=8894.40, stdev=7327.44, samples=5 00:28:18.450 iops : min= 1, max= 20, avg= 8.60, stdev= 7.27, samples=5 00:28:18.450 lat (msec) : 100=0.67%, 2000=1.34%, >=2000=97.99% 00:28:18.450 cpu : usr=0.01%, sys=0.99%, ctx=552, majf=0, minf=32769 00:28:18.450 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.4%, 16=10.7%, 32=21.5%, >=64=57.7% 00:28:18.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.450 complete : 0=0.0%, 4=95.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.3% 00:28:18.450 issued rwts: total=149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.450 job4: (groupid=0, jobs=1): err= 0: pid=3458254: Wed Nov 27 05:45:13 2024 00:28:18.450 read: IOPS=33, BW=33.1MiB/s (34.7MB/s)(355MiB/10734msec) 00:28:18.450 slat (usec): min=101, max=2071.7k, avg=29962.52, stdev=183180.45 00:28:18.450 clat (msec): min=95, max=7367, avg=3350.32, stdev=2536.81 00:28:18.450 lat (msec): min=1091, max=7399, avg=3380.29, stdev=2533.68 00:28:18.450 clat percentiles (msec): 00:28:18.450 | 1.00th=[ 1099], 5.00th=[ 1167], 10.00th=[ 1200], 20.00th=[ 1234], 00:28:18.450 | 30.00th=[ 1284], 40.00th=[ 1452], 50.00th=[ 1770], 60.00th=[ 2299], 00:28:18.450 | 70.00th=[ 6342], 80.00th=[ 6678], 90.00th=[ 7013], 95.00th=[ 7282], 00:28:18.450 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7349], 99.95th=[ 7349], 00:28:18.450 | 99.99th=[ 7349] 00:28:18.450 bw ( KiB/s): min= 1458, max=112415, per=1.69%, avg=58266.13, stdev=49383.67, samples=8 00:28:18.450 iops : min= 1, max= 109, avg=56.75, stdev=48.17, samples=8 00:28:18.450 lat (msec) : 100=0.28%, 2000=53.52%, >=2000=46.20% 
00:28:18.450 cpu : usr=0.00%, sys=1.01%, ctx=1183, majf=0, minf=32769 00:28:18.450 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.3% 00:28:18.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.450 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:18.450 issued rwts: total=355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.450 job4: (groupid=0, jobs=1): err= 0: pid=3458255: Wed Nov 27 05:45:13 2024 00:28:18.450 read: IOPS=5, BW=5777KiB/s (5915kB/s)(60.0MiB/10636msec) 00:28:18.450 slat (usec): min=601, max=2058.4k, avg=175551.98, stdev=549492.46 00:28:18.450 clat (msec): min=102, max=10570, avg=4829.30, stdev=3131.14 00:28:18.450 lat (msec): min=2024, max=10635, avg=5004.86, stdev=3156.78 00:28:18.451 clat percentiles (msec): 00:28:18.451 | 1.00th=[ 103], 5.00th=[ 2022], 10.00th=[ 2039], 20.00th=[ 2039], 00:28:18.451 | 30.00th=[ 2165], 40.00th=[ 2165], 50.00th=[ 2232], 60.00th=[ 6342], 00:28:18.451 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10537], 00:28:18.451 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:28:18.451 | 99.99th=[10537] 00:28:18.451 lat (msec) : 250=1.67%, >=2000=98.33% 00:28:18.451 cpu : usr=0.01%, sys=0.48%, ctx=94, majf=0, minf=15361 00:28:18.451 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:28:18.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.451 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:18.451 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.451 job4: (groupid=0, jobs=1): err= 0: pid=3458256: Wed Nov 27 05:45:13 2024 00:28:18.451 read: IOPS=49, BW=49.9MiB/s (52.3MB/s)(535MiB/10732msec) 00:28:18.451 slat (usec): min=40, max=2169.1k, avg=19897.19, 
stdev=153037.40 00:28:18.451 clat (msec): min=83, max=6396, avg=2418.09, stdev=1837.98 00:28:18.451 lat (msec): min=619, max=6404, avg=2437.99, stdev=1845.92 00:28:18.451 clat percentiles (msec): 00:28:18.451 | 1.00th=[ 617], 5.00th=[ 634], 10.00th=[ 676], 20.00th=[ 743], 00:28:18.451 | 30.00th=[ 927], 40.00th=[ 1200], 50.00th=[ 1603], 60.00th=[ 2433], 00:28:18.451 | 70.00th=[ 2802], 80.00th=[ 5067], 90.00th=[ 5537], 95.00th=[ 5873], 00:28:18.451 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6409], 99.95th=[ 6409], 00:28:18.451 | 99.99th=[ 6409] 00:28:18.451 bw ( KiB/s): min= 1434, max=208479, per=2.20%, avg=75854.64, stdev=71269.92, samples=11 00:28:18.451 iops : min= 1, max= 203, avg=73.91, stdev=69.53, samples=11 00:28:18.451 lat (msec) : 100=0.19%, 750=20.75%, 1000=10.84%, 2000=19.07%, >=2000=49.16% 00:28:18.451 cpu : usr=0.04%, sys=1.38%, ctx=829, majf=0, minf=32769 00:28:18.451 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:28:18.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.451 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.451 issued rwts: total=535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.451 job4: (groupid=0, jobs=1): err= 0: pid=3458257: Wed Nov 27 05:45:13 2024 00:28:18.451 read: IOPS=21, BW=21.6MiB/s (22.7MB/s)(232MiB/10730msec) 00:28:18.451 slat (usec): min=467, max=2063.4k, avg=45882.19, stdev=245499.69 00:28:18.451 clat (msec): min=83, max=9007, avg=5350.27, stdev=1904.66 00:28:18.451 lat (msec): min=1993, max=9009, avg=5396.16, stdev=1882.76 00:28:18.451 clat percentiles (msec): 00:28:18.451 | 1.00th=[ 2056], 5.00th=[ 2198], 10.00th=[ 3339], 20.00th=[ 3742], 00:28:18.451 | 30.00th=[ 4010], 40.00th=[ 4212], 50.00th=[ 6007], 60.00th=[ 6074], 00:28:18.451 | 70.00th=[ 6208], 80.00th=[ 6342], 90.00th=[ 8658], 95.00th=[ 8792], 00:28:18.451 | 99.00th=[ 8926], 99.50th=[ 9060], 
99.90th=[ 9060], 99.95th=[ 9060], 00:28:18.451 | 99.99th=[ 9060] 00:28:18.451 bw ( KiB/s): min= 1434, max=122880, per=1.04%, avg=35737.67, stdev=46552.88, samples=6 00:28:18.451 iops : min= 1, max= 120, avg=34.83, stdev=45.52, samples=6 00:28:18.451 lat (msec) : 100=0.43%, 2000=0.43%, >=2000=99.14% 00:28:18.451 cpu : usr=0.02%, sys=1.09%, ctx=799, majf=0, minf=32769 00:28:18.451 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.9%, 32=13.8%, >=64=72.8% 00:28:18.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.451 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:28:18.451 issued rwts: total=232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.451 job4: (groupid=0, jobs=1): err= 0: pid=3458258: Wed Nov 27 05:45:13 2024 00:28:18.451 read: IOPS=29, BW=29.7MiB/s (31.1MB/s)(300MiB/10109msec) 00:28:18.451 slat (usec): min=69, max=2134.5k, avg=33331.90, stdev=206715.27 00:28:18.451 clat (msec): min=107, max=9528, avg=3957.32, stdev=3741.56 00:28:18.451 lat (msec): min=109, max=9529, avg=3990.65, stdev=3748.09 00:28:18.451 clat percentiles (msec): 00:28:18.451 | 1.00th=[ 112], 5.00th=[ 211], 10.00th=[ 313], 20.00th=[ 523], 00:28:18.451 | 30.00th=[ 743], 40.00th=[ 953], 50.00th=[ 1062], 60.00th=[ 7483], 00:28:18.451 | 70.00th=[ 8020], 80.00th=[ 8423], 90.00th=[ 8658], 95.00th=[ 8792], 00:28:18.451 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:28:18.451 | 99.99th=[ 9463] 00:28:18.451 bw ( KiB/s): min= 2048, max=137216, per=1.71%, avg=59050.67, stdev=63129.14, samples=6 00:28:18.451 iops : min= 2, max= 134, avg=57.67, stdev=61.65, samples=6 00:28:18.451 lat (msec) : 250=7.67%, 500=10.33%, 750=12.33%, 1000=13.67%, 2000=10.00% 00:28:18.451 lat (msec) : >=2000=46.00% 00:28:18.451 cpu : usr=0.01%, sys=1.30%, ctx=768, majf=0, minf=32331 00:28:18.451 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.3%, 32=10.7%, >=64=79.0% 
00:28:18.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.451 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:18.451 issued rwts: total=300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.451 job4: (groupid=0, jobs=1): err= 0: pid=3458259: Wed Nov 27 05:45:13 2024 00:28:18.451 read: IOPS=17, BW=17.8MiB/s (18.6MB/s)(189MiB/10643msec) 00:28:18.451 slat (usec): min=86, max=2100.5k, avg=55865.14, stdev=291133.43 00:28:18.451 clat (msec): min=83, max=9657, avg=6474.13, stdev=3376.42 00:28:18.451 lat (msec): min=1233, max=9666, avg=6530.00, stdev=3345.31 00:28:18.451 clat percentiles (msec): 00:28:18.451 | 1.00th=[ 1234], 5.00th=[ 1250], 10.00th=[ 1318], 20.00th=[ 1905], 00:28:18.451 | 30.00th=[ 3339], 40.00th=[ 7684], 50.00th=[ 8658], 60.00th=[ 8926], 00:28:18.451 | 70.00th=[ 9060], 80.00th=[ 9329], 90.00th=[ 9463], 95.00th=[ 9597], 00:28:18.451 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:28:18.451 | 99.99th=[ 9597] 00:28:18.451 bw ( KiB/s): min= 2048, max=59392, per=0.53%, avg=18140.00, stdev=20253.92, samples=7 00:28:18.451 iops : min= 2, max= 58, avg=17.71, stdev=19.78, samples=7 00:28:18.451 lat (msec) : 100=0.53%, 2000=23.28%, >=2000=76.19% 00:28:18.451 cpu : usr=0.02%, sys=1.17%, ctx=441, majf=0, minf=32769 00:28:18.451 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.5%, 32=16.9%, >=64=66.7% 00:28:18.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.451 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:28:18.451 issued rwts: total=189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.451 job4: (groupid=0, jobs=1): err= 0: pid=3458260: Wed Nov 27 05:45:13 2024 00:28:18.451 read: IOPS=22, BW=22.5MiB/s (23.6MB/s)(239MiB/10604msec) 00:28:18.451 slat (usec): min=88, max=2109.8k, 
avg=44007.03, stdev=254486.76 00:28:18.451 clat (msec): min=83, max=9511, avg=5303.15, stdev=3517.25 00:28:18.451 lat (msec): min=1251, max=9514, avg=5347.16, stdev=3506.85 00:28:18.451 clat percentiles (msec): 00:28:18.451 | 1.00th=[ 1250], 5.00th=[ 1267], 10.00th=[ 1301], 20.00th=[ 1318], 00:28:18.451 | 30.00th=[ 1368], 40.00th=[ 3272], 50.00th=[ 6275], 60.00th=[ 8423], 00:28:18.451 | 70.00th=[ 8792], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9329], 00:28:18.451 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:28:18.451 | 99.99th=[ 9463] 00:28:18.451 bw ( KiB/s): min= 2048, max=90112, per=0.83%, avg=28672.50, stdev=31632.43, samples=8 00:28:18.451 iops : min= 2, max= 88, avg=28.00, stdev=30.89, samples=8 00:28:18.451 lat (msec) : 100=0.42%, 2000=36.82%, >=2000=62.76% 00:28:18.451 cpu : usr=0.01%, sys=1.19%, ctx=423, majf=0, minf=32769 00:28:18.451 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.4%, >=64=73.6% 00:28:18.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.451 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:28:18.451 issued rwts: total=239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.451 job4: (groupid=0, jobs=1): err= 0: pid=3458261: Wed Nov 27 05:45:13 2024 00:28:18.451 read: IOPS=47, BW=47.5MiB/s (49.8MB/s)(506MiB/10663msec) 00:28:18.451 slat (usec): min=60, max=2019.6k, avg=20877.06, stdev=126759.46 00:28:18.451 clat (msec): min=95, max=4250, avg=2390.70, stdev=1062.69 00:28:18.451 lat (msec): min=999, max=4252, avg=2411.58, stdev=1057.76 00:28:18.451 clat percentiles (msec): 00:28:18.452 | 1.00th=[ 1003], 5.00th=[ 1036], 10.00th=[ 1083], 20.00th=[ 1200], 00:28:18.452 | 30.00th=[ 1502], 40.00th=[ 1620], 50.00th=[ 2198], 60.00th=[ 3071], 00:28:18.452 | 70.00th=[ 3239], 80.00th=[ 3507], 90.00th=[ 3608], 95.00th=[ 4044], 00:28:18.452 | 99.00th=[ 4245], 99.50th=[ 4245], 99.90th=[ 
4245], 99.95th=[ 4245], 00:28:18.452 | 99.99th=[ 4245] 00:28:18.452 bw ( KiB/s): min= 2052, max=139264, per=1.88%, avg=64683.00, stdev=45160.81, samples=12 00:28:18.452 iops : min= 2, max= 136, avg=63.17, stdev=44.10, samples=12 00:28:18.452 lat (msec) : 100=0.20%, 1000=0.40%, 2000=45.85%, >=2000=53.56% 00:28:18.452 cpu : usr=0.00%, sys=1.29%, ctx=1255, majf=0, minf=32769 00:28:18.452 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:28:18.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.452 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.452 issued rwts: total=506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.452 job4: (groupid=0, jobs=1): err= 0: pid=3458262: Wed Nov 27 05:45:13 2024 00:28:18.452 read: IOPS=22, BW=22.2MiB/s (23.3MB/s)(237MiB/10676msec) 00:28:18.452 slat (usec): min=102, max=2060.6k, avg=44648.59, stdev=256432.17 00:28:18.452 clat (msec): min=92, max=9358, avg=5283.73, stdev=3222.07 00:28:18.452 lat (msec): min=1235, max=9369, avg=5328.37, stdev=3211.55 00:28:18.452 clat percentiles (msec): 00:28:18.452 | 1.00th=[ 1234], 5.00th=[ 1301], 10.00th=[ 1334], 20.00th=[ 1401], 00:28:18.452 | 30.00th=[ 2022], 40.00th=[ 3406], 50.00th=[ 5403], 60.00th=[ 7416], 00:28:18.452 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9194], 95.00th=[ 9329], 00:28:18.452 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:28:18.452 | 99.99th=[ 9329] 00:28:18.452 bw ( KiB/s): min= 1610, max=100352, per=0.82%, avg=28105.25, stdev=31755.73, samples=8 00:28:18.452 iops : min= 1, max= 98, avg=27.38, stdev=31.08, samples=8 00:28:18.452 lat (msec) : 100=0.42%, 2000=28.69%, >=2000=70.89% 00:28:18.452 cpu : usr=0.01%, sys=1.12%, ctx=572, majf=0, minf=32769 00:28:18.452 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.5%, >=64=73.4% 00:28:18.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.452 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:28:18.452 issued rwts: total=237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.452 job5: (groupid=0, jobs=1): err= 0: pid=3458263: Wed Nov 27 05:45:13 2024 00:28:18.452 read: IOPS=39, BW=39.8MiB/s (41.7MB/s)(401MiB/10081msec) 00:28:18.452 slat (usec): min=506, max=1883.8k, avg=24966.94, stdev=95488.45 00:28:18.452 clat (msec): min=66, max=5913, avg=2089.68, stdev=1227.69 00:28:18.452 lat (msec): min=92, max=5970, avg=2114.65, stdev=1243.15 00:28:18.452 clat percentiles (msec): 00:28:18.452 | 1.00th=[ 104], 5.00th=[ 372], 10.00th=[ 718], 20.00th=[ 1301], 00:28:18.452 | 30.00th=[ 1435], 40.00th=[ 1603], 50.00th=[ 1687], 60.00th=[ 1854], 00:28:18.452 | 70.00th=[ 2668], 80.00th=[ 3272], 90.00th=[ 4044], 95.00th=[ 4245], 00:28:18.452 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5940], 99.95th=[ 5940], 00:28:18.452 | 99.99th=[ 5940] 00:28:18.452 bw ( KiB/s): min=14336, max=116736, per=1.80%, avg=62118.56, stdev=36583.98, samples=9 00:28:18.452 iops : min= 14, max= 114, avg=60.56, stdev=35.87, samples=9 00:28:18.452 lat (msec) : 100=0.75%, 250=2.24%, 500=3.74%, 750=4.24%, 1000=3.99% 00:28:18.452 lat (msec) : 2000=51.62%, >=2000=33.42% 00:28:18.452 cpu : usr=0.02%, sys=1.17%, ctx=1435, majf=0, minf=32769 00:28:18.452 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3% 00:28:18.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.452 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:18.452 issued rwts: total=401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.452 job5: (groupid=0, jobs=1): err= 0: pid=3458264: Wed Nov 27 05:45:13 2024 00:28:18.452 read: IOPS=90, BW=90.3MiB/s (94.7MB/s)(961MiB/10642msec) 00:28:18.452 slat (usec): 
min=34, max=2181.8k, avg=10979.49, stdev=103539.32 00:28:18.452 clat (msec): min=87, max=6240, avg=1145.57, stdev=1675.19 00:28:18.452 lat (msec): min=282, max=6242, avg=1156.55, stdev=1683.24 00:28:18.452 clat percentiles (msec): 00:28:18.452 | 1.00th=[ 284], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 288], 00:28:18.452 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 300], 00:28:18.452 | 70.00th=[ 493], 80.00th=[ 1703], 90.00th=[ 4732], 95.00th=[ 5336], 00:28:18.452 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:28:18.452 | 99.99th=[ 6208] 00:28:18.452 bw ( KiB/s): min= 2052, max=456704, per=4.13%, avg=142336.33, stdev=178641.63, samples=12 00:28:18.452 iops : min= 2, max= 446, avg=139.00, stdev=174.45, samples=12 00:28:18.452 lat (msec) : 100=0.10%, 500=70.55%, 750=3.54%, 1000=1.04%, 2000=7.49% 00:28:18.452 lat (msec) : >=2000=17.27% 00:28:18.452 cpu : usr=0.01%, sys=1.65%, ctx=1314, majf=0, minf=32769 00:28:18.452 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:28:18.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.452 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.452 issued rwts: total=961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.452 job5: (groupid=0, jobs=1): err= 0: pid=3458265: Wed Nov 27 05:45:13 2024 00:28:18.452 read: IOPS=52, BW=52.5MiB/s (55.0MB/s)(557MiB/10613msec) 00:28:18.452 slat (usec): min=37, max=2048.8k, avg=18882.36, stdev=156896.53 00:28:18.452 clat (msec): min=90, max=4735, avg=1565.01, stdev=1528.69 00:28:18.452 lat (msec): min=424, max=4751, avg=1583.89, stdev=1537.36 00:28:18.452 clat percentiles (msec): 00:28:18.452 | 1.00th=[ 426], 5.00th=[ 426], 10.00th=[ 430], 20.00th=[ 435], 00:28:18.452 | 30.00th=[ 451], 40.00th=[ 535], 50.00th=[ 667], 60.00th=[ 743], 00:28:18.452 | 70.00th=[ 1938], 80.00th=[ 3071], 90.00th=[ 4530], 
95.00th=[ 4665], 00:28:18.452 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:28:18.452 | 99.99th=[ 4732] 00:28:18.452 bw ( KiB/s): min= 2052, max=294912, per=3.19%, avg=110056.38, stdev=113787.35, samples=8 00:28:18.452 iops : min= 2, max= 288, avg=107.38, stdev=111.14, samples=8 00:28:18.452 lat (msec) : 100=0.18%, 500=35.55%, 750=24.78%, 1000=2.15%, 2000=8.26% 00:28:18.452 lat (msec) : >=2000=29.08% 00:28:18.452 cpu : usr=0.03%, sys=1.65%, ctx=681, majf=0, minf=32769 00:28:18.452 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:28:18.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.452 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.452 issued rwts: total=557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.452 job5: (groupid=0, jobs=1): err= 0: pid=3458266: Wed Nov 27 05:45:13 2024 00:28:18.452 read: IOPS=156, BW=157MiB/s (165MB/s)(1676MiB/10680msec) 00:28:18.452 slat (usec): min=38, max=2009.4k, avg=6312.70, stdev=75155.50 00:28:18.452 clat (msec): min=87, max=2653, avg=779.67, stdev=812.10 00:28:18.452 lat (msec): min=282, max=2655, avg=785.98, stdev=814.43 00:28:18.452 clat percentiles (msec): 00:28:18.452 | 1.00th=[ 284], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 288], 00:28:18.452 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 418], 60.00th=[ 422], 00:28:18.452 | 70.00th=[ 447], 80.00th=[ 1787], 90.00th=[ 2400], 95.00th=[ 2567], 00:28:18.452 | 99.00th=[ 2635], 99.50th=[ 2635], 99.90th=[ 2668], 99.95th=[ 2668], 00:28:18.452 | 99.99th=[ 2668] 00:28:18.452 bw ( KiB/s): min= 1610, max=456704, per=7.08%, avg=243993.38, stdev=177628.12, samples=13 00:28:18.452 iops : min= 1, max= 446, avg=238.23, stdev=173.53, samples=13 00:28:18.452 lat (msec) : 100=0.06%, 500=75.95%, 750=0.36%, 1000=0.89%, 2000=7.58% 00:28:18.452 lat (msec) : >=2000=15.16% 00:28:18.452 cpu : usr=0.09%, 
sys=2.79%, ctx=1449, majf=0, minf=32769 00:28:18.452 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:28:18.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.453 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.453 issued rwts: total=1676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.453 job5: (groupid=0, jobs=1): err= 0: pid=3458267: Wed Nov 27 05:45:13 2024 00:28:18.453 read: IOPS=111, BW=111MiB/s (117MB/s)(1114MiB/10015msec) 00:28:18.453 slat (usec): min=43, max=1019.7k, avg=8969.43, stdev=32320.88 00:28:18.453 clat (msec): min=13, max=3540, avg=1076.54, stdev=754.45 00:28:18.453 lat (msec): min=14, max=3674, avg=1085.50, stdev=757.89 00:28:18.453 clat percentiles (msec): 00:28:18.453 | 1.00th=[ 21], 5.00th=[ 153], 10.00th=[ 676], 20.00th=[ 709], 00:28:18.453 | 30.00th=[ 751], 40.00th=[ 827], 50.00th=[ 885], 60.00th=[ 936], 00:28:18.453 | 70.00th=[ 978], 80.00th=[ 1133], 90.00th=[ 2702], 95.00th=[ 3104], 00:28:18.453 | 99.00th=[ 3272], 99.50th=[ 3406], 99.90th=[ 3406], 99.95th=[ 3540], 00:28:18.453 | 99.99th=[ 3540] 00:28:18.453 bw ( KiB/s): min= 8192, max=192512, per=3.28%, avg=113005.56, stdev=56085.83, samples=16 00:28:18.453 iops : min= 8, max= 188, avg=110.31, stdev=54.74, samples=16 00:28:18.453 lat (msec) : 20=0.90%, 50=1.35%, 100=1.26%, 250=3.68%, 500=2.15% 00:28:18.453 lat (msec) : 750=20.56%, 1000=41.74%, 2000=16.97%, >=2000=11.40% 00:28:18.453 cpu : usr=0.10%, sys=2.07%, ctx=1529, majf=0, minf=32769 00:28:18.453 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:28:18.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.453 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.453 issued rwts: total=1114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.453 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:28:18.453 job5: (groupid=0, jobs=1): err= 0: pid=3458268: Wed Nov 27 05:45:13 2024 00:28:18.453 read: IOPS=48, BW=48.6MiB/s (51.0MB/s)(514MiB/10573msec) 00:28:18.453 slat (usec): min=451, max=2024.0k, avg=20440.43, stdev=153749.02 00:28:18.453 clat (msec): min=62, max=5213, avg=1756.79, stdev=1407.94 00:28:18.453 lat (msec): min=571, max=5223, avg=1777.23, stdev=1414.01 00:28:18.453 clat percentiles (msec): 00:28:18.453 | 1.00th=[ 567], 5.00th=[ 575], 10.00th=[ 584], 20.00th=[ 718], 00:28:18.453 | 30.00th=[ 869], 40.00th=[ 902], 50.00th=[ 969], 60.00th=[ 1062], 00:28:18.453 | 70.00th=[ 3205], 80.00th=[ 3406], 90.00th=[ 3641], 95.00th=[ 5134], 00:28:18.453 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201], 00:28:18.453 | 99.99th=[ 5201] 00:28:18.453 bw ( KiB/s): min= 4096, max=223232, per=3.27%, avg=112932.57, stdev=81315.56, samples=7 00:28:18.453 iops : min= 4, max= 218, avg=110.29, stdev=79.41, samples=7 00:28:18.453 lat (msec) : 100=0.19%, 750=21.60%, 1000=31.91%, 2000=14.98%, >=2000=31.32% 00:28:18.453 cpu : usr=0.05%, sys=1.88%, ctx=934, majf=0, minf=32769 00:28:18.453 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:28:18.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.453 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.453 issued rwts: total=514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.453 job5: (groupid=0, jobs=1): err= 0: pid=3458269: Wed Nov 27 05:45:13 2024 00:28:18.453 read: IOPS=58, BW=58.8MiB/s (61.6MB/s)(592MiB/10076msec) 00:28:18.453 slat (usec): min=181, max=2016.3k, avg=16910.93, stdev=113946.55 00:28:18.453 clat (msec): min=61, max=4170, avg=1725.07, stdev=1322.34 00:28:18.453 lat (msec): min=94, max=5039, avg=1741.98, stdev=1330.72 00:28:18.453 clat percentiles (msec): 00:28:18.453 | 1.00th=[ 174], 5.00th=[ 584], 10.00th=[ 609], 20.00th=[ 
743], 00:28:18.453 | 30.00th=[ 852], 40.00th=[ 1003], 50.00th=[ 1036], 60.00th=[ 1133], 00:28:18.453 | 70.00th=[ 1435], 80.00th=[ 3641], 90.00th=[ 3910], 95.00th=[ 3977], 00:28:18.453 | 99.00th=[ 4144], 99.50th=[ 4144], 99.90th=[ 4178], 99.95th=[ 4178], 00:28:18.453 | 99.99th=[ 4178] 00:28:18.453 bw ( KiB/s): min=10240, max=215040, per=2.51%, avg=86384.27, stdev=59635.17, samples=11 00:28:18.453 iops : min= 10, max= 210, avg=84.27, stdev=58.34, samples=11 00:28:18.453 lat (msec) : 100=0.34%, 250=1.01%, 500=2.70%, 750=17.23%, 1000=17.57% 00:28:18.453 lat (msec) : 2000=32.77%, >=2000=28.38% 00:28:18.453 cpu : usr=0.04%, sys=1.51%, ctx=1574, majf=0, minf=32769 00:28:18.453 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.4% 00:28:18.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.453 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.453 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.453 job5: (groupid=0, jobs=1): err= 0: pid=3458270: Wed Nov 27 05:45:13 2024 00:28:18.453 read: IOPS=113, BW=113MiB/s (119MB/s)(1214MiB/10729msec) 00:28:18.453 slat (usec): min=34, max=2026.0k, avg=8747.84, stdev=96603.67 00:28:18.453 clat (msec): min=102, max=5441, avg=695.15, stdev=912.76 00:28:18.453 lat (msec): min=137, max=5444, avg=703.90, stdev=924.38 00:28:18.453 clat percentiles (msec): 00:28:18.453 | 1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 140], 00:28:18.453 | 30.00th=[ 142], 40.00th=[ 146], 50.00th=[ 255], 60.00th=[ 567], 00:28:18.453 | 70.00th=[ 802], 80.00th=[ 953], 90.00th=[ 2366], 95.00th=[ 2500], 00:28:18.453 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5470], 99.95th=[ 5470], 00:28:18.453 | 99.99th=[ 5470] 00:28:18.453 bw ( KiB/s): min= 1464, max=929792, per=7.17%, avg=247288.00, stdev=300335.85, samples=9 00:28:18.453 iops : min= 1, max= 908, avg=241.44, stdev=293.34, 
samples=9 00:28:18.453 lat (msec) : 250=49.18%, 500=8.57%, 750=10.87%, 1000=14.33%, 2000=4.78% 00:28:18.453 lat (msec) : >=2000=12.27% 00:28:18.453 cpu : usr=0.06%, sys=2.26%, ctx=1593, majf=0, minf=32769 00:28:18.453 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:28:18.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.453 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.453 issued rwts: total=1214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.453 job5: (groupid=0, jobs=1): err= 0: pid=3458271: Wed Nov 27 05:45:13 2024 00:28:18.453 read: IOPS=65, BW=65.4MiB/s (68.6MB/s)(658MiB/10055msec) 00:28:18.453 slat (usec): min=422, max=2028.5k, avg=15194.26, stdev=94633.54 00:28:18.453 clat (msec): min=54, max=3919, avg=1607.71, stdev=1168.92 00:28:18.453 lat (msec): min=56, max=3934, avg=1622.90, stdev=1173.59 00:28:18.453 clat percentiles (msec): 00:28:18.453 | 1.00th=[ 97], 5.00th=[ 300], 10.00th=[ 305], 20.00th=[ 321], 00:28:18.453 | 30.00th=[ 709], 40.00th=[ 919], 50.00th=[ 1284], 60.00th=[ 2022], 00:28:18.453 | 70.00th=[ 2198], 80.00th=[ 2937], 90.00th=[ 3272], 95.00th=[ 3675], 00:28:18.453 | 99.00th=[ 3842], 99.50th=[ 3842], 99.90th=[ 3910], 99.95th=[ 3910], 00:28:18.454 | 99.99th=[ 3910] 00:28:18.454 bw ( KiB/s): min= 6144, max=425984, per=2.96%, avg=102195.20, stdev=120831.81, samples=10 00:28:18.454 iops : min= 6, max= 416, avg=99.80, stdev=118.00, samples=10 00:28:18.454 lat (msec) : 100=1.06%, 250=1.37%, 500=24.62%, 750=4.56%, 1000=11.40% 00:28:18.454 lat (msec) : 2000=16.57%, >=2000=40.43% 00:28:18.454 cpu : usr=0.01%, sys=1.31%, ctx=1920, majf=0, minf=32769 00:28:18.454 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:28:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.454 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.2% 00:28:18.454 issued rwts: total=658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.454 job5: (groupid=0, jobs=1): err= 0: pid=3458272: Wed Nov 27 05:45:13 2024 00:28:18.454 read: IOPS=45, BW=45.3MiB/s (47.5MB/s)(456MiB/10064msec) 00:28:18.454 slat (usec): min=45, max=1874.1k, avg=21937.12, stdev=89000.82 00:28:18.454 clat (msec): min=57, max=3687, avg=2057.95, stdev=672.98 00:28:18.454 lat (msec): min=81, max=3731, avg=2079.89, stdev=675.53 00:28:18.454 clat percentiles (msec): 00:28:18.454 | 1.00th=[ 94], 5.00th=[ 527], 10.00th=[ 1150], 20.00th=[ 1720], 00:28:18.454 | 30.00th=[ 1804], 40.00th=[ 1871], 50.00th=[ 2106], 60.00th=[ 2400], 00:28:18.454 | 70.00th=[ 2534], 80.00th=[ 2635], 90.00th=[ 2735], 95.00th=[ 2802], 00:28:18.454 | 99.00th=[ 3608], 99.50th=[ 3641], 99.90th=[ 3675], 99.95th=[ 3675], 00:28:18.454 | 99.99th=[ 3675] 00:28:18.454 bw ( KiB/s): min=18432, max=81920, per=1.50%, avg=51830.15, stdev=21984.38, samples=13 00:28:18.454 iops : min= 18, max= 80, avg=50.62, stdev=21.47, samples=13 00:28:18.454 lat (msec) : 100=1.10%, 250=1.75%, 500=1.75%, 750=2.19%, 1000=2.41% 00:28:18.454 lat (msec) : 2000=38.16%, >=2000=52.63% 00:28:18.454 cpu : usr=0.09%, sys=1.36%, ctx=1588, majf=0, minf=32769 00:28:18.454 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.0%, >=64=86.2% 00:28:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.454 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.454 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.454 job5: (groupid=0, jobs=1): err= 0: pid=3458273: Wed Nov 27 05:45:13 2024 00:28:18.454 read: IOPS=44, BW=44.5MiB/s (46.7MB/s)(473MiB/10618msec) 00:28:18.454 slat (usec): min=46, max=2043.6k, avg=22249.69, stdev=127509.70 00:28:18.454 clat (msec): min=90, max=5222, avg=2671.71, 
stdev=1188.11 00:28:18.454 lat (msec): min=759, max=5224, avg=2693.96, stdev=1182.53 00:28:18.454 clat percentiles (msec): 00:28:18.454 | 1.00th=[ 776], 5.00th=[ 969], 10.00th=[ 1368], 20.00th=[ 1703], 00:28:18.454 | 30.00th=[ 2005], 40.00th=[ 2299], 50.00th=[ 2400], 60.00th=[ 2500], 00:28:18.454 | 70.00th=[ 3104], 80.00th=[ 3608], 90.00th=[ 4933], 95.00th=[ 5134], 00:28:18.454 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201], 00:28:18.454 | 99.99th=[ 5201] 00:28:18.454 bw ( KiB/s): min= 2052, max=206848, per=1.58%, avg=54498.15, stdev=50374.71, samples=13 00:28:18.454 iops : min= 2, max= 202, avg=53.08, stdev=49.26, samples=13 00:28:18.454 lat (msec) : 100=0.21%, 750=0.42%, 1000=5.29%, 2000=23.89%, >=2000=70.19% 00:28:18.454 cpu : usr=0.01%, sys=1.40%, ctx=1197, majf=0, minf=32769 00:28:18.454 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:28:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.454 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.454 issued rwts: total=473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.454 job5: (groupid=0, jobs=1): err= 0: pid=3458274: Wed Nov 27 05:45:13 2024 00:28:18.454 read: IOPS=46, BW=46.8MiB/s (49.0MB/s)(498MiB/10651msec) 00:28:18.454 slat (usec): min=35, max=2189.6k, avg=21177.59, stdev=135131.86 00:28:18.454 clat (msec): min=102, max=4367, avg=2096.80, stdev=963.48 00:28:18.454 lat (msec): min=595, max=4460, avg=2117.97, stdev=964.60 00:28:18.454 clat percentiles (msec): 00:28:18.454 | 1.00th=[ 592], 5.00th=[ 802], 10.00th=[ 986], 20.00th=[ 1183], 00:28:18.454 | 30.00th=[ 1301], 40.00th=[ 1418], 50.00th=[ 2039], 60.00th=[ 2467], 00:28:18.454 | 70.00th=[ 2869], 80.00th=[ 3239], 90.00th=[ 3272], 95.00th=[ 3306], 00:28:18.454 | 99.00th=[ 4212], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:28:18.454 | 99.99th=[ 4396] 00:28:18.454 
bw ( KiB/s): min= 2052, max=176128, per=1.97%, avg=67886.09, stdev=62306.92, samples=11 00:28:18.454 iops : min= 2, max= 172, avg=66.18, stdev=60.77, samples=11 00:28:18.454 lat (msec) : 250=0.20%, 750=3.61%, 1000=7.23%, 2000=38.15%, >=2000=50.80% 00:28:18.454 cpu : usr=0.01%, sys=1.06%, ctx=1531, majf=0, minf=32769 00:28:18.454 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.3% 00:28:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.454 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:18.454 issued rwts: total=498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.454 job5: (groupid=0, jobs=1): err= 0: pid=3458275: Wed Nov 27 05:45:13 2024 00:28:18.454 read: IOPS=74, BW=74.7MiB/s (78.3MB/s)(751MiB/10060msec) 00:28:18.454 slat (usec): min=41, max=1973.9k, avg=13315.83, stdev=98165.50 00:28:18.454 clat (msec): min=55, max=3705, avg=1343.97, stdev=1000.45 00:28:18.454 lat (msec): min=60, max=3706, avg=1357.29, stdev=1003.74 00:28:18.454 clat percentiles (msec): 00:28:18.454 | 1.00th=[ 113], 5.00th=[ 372], 10.00th=[ 405], 20.00th=[ 451], 00:28:18.454 | 30.00th=[ 743], 40.00th=[ 877], 50.00th=[ 1200], 60.00th=[ 1250], 00:28:18.454 | 70.00th=[ 1334], 80.00th=[ 1519], 90.00th=[ 3205], 95.00th=[ 3540], 00:28:18.454 | 99.00th=[ 3641], 99.50th=[ 3708], 99.90th=[ 3708], 99.95th=[ 3708], 00:28:18.454 | 99.99th=[ 3708] 00:28:18.454 bw ( KiB/s): min= 8192, max=252408, per=3.09%, avg=106617.92, stdev=76796.01, samples=12 00:28:18.454 iops : min= 8, max= 246, avg=103.92, stdev=74.90, samples=12 00:28:18.454 lat (msec) : 100=0.80%, 250=3.20%, 500=17.18%, 750=9.59%, 1000=12.25% 00:28:18.454 lat (msec) : 2000=37.02%, >=2000=19.97% 00:28:18.454 cpu : usr=0.02%, sys=1.73%, ctx=1971, majf=0, minf=32769 00:28:18.454 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:28:18.454 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.454 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:18.454 issued rwts: total=751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.454 00:28:18.454 Run status group 0 (all jobs): 00:28:18.454 READ: bw=3368MiB/s (3531MB/s), 1436KiB/s-157MiB/s (1471kB/s-165MB/s), io=35.5GiB (38.1GB), run=10014-10797msec 00:28:18.454 00:28:18.454 Disk stats (read/write): 00:28:18.454 nvme0n1: ios=38902/0, merge=0/0, ticks=7011154/0, in_queue=7011154, util=98.12% 00:28:18.454 nvme1n1: ios=33479/0, merge=0/0, ticks=7580518/0, in_queue=7580518, util=98.53% 00:28:18.454 nvme2n1: ios=31518/0, merge=0/0, ticks=7868364/0, in_queue=7868364, util=98.14% 00:28:18.454 nvme3n1: ios=71619/0, merge=0/0, ticks=7477011/0, in_queue=7477011, util=98.60% 00:28:18.454 nvme4n1: ios=34174/0, merge=0/0, ticks=5077511/0, in_queue=5077511, util=98.96% 00:28:18.454 nvme5n1: ios=78887/0, merge=0/0, ticks=6927140/0, in_queue=6927140, util=98.98% 00:28:18.454 05:45:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:28:18.454 05:45:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:28:18.454 05:45:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:18.454 05:45:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:28:18.712 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:18.712 05:45:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:19.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:28:19.900 05:45:16 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:19.900 05:45:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:20.831 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- 
# grep -q -w SPDK00000000000002 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:20.831 05:45:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:21.763 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:21.763 05:45:18 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:21.763 05:45:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:22.694 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:22.694 05:45:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:23.626 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:23.626 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:28:23.626 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:23.626 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.884 05:45:20 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:23.884 rmmod nvme_rdma 00:28:23.884 rmmod nvme_fabrics 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3456483 ']' 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3456483 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 3456483 ']' 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 3456483 
00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3456483 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3456483' 00:28:23.884 killing process with pid 3456483 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 3456483 00:28:23.884 05:45:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 3456483 00:28:26.414 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:26.414 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:26.414 00:28:26.414 real 0m36.790s 00:28:26.414 user 2m1.196s 00:28:26.415 sys 0m19.330s 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:26.415 ************************************ 00:28:26.415 END TEST nvmf_srq_overwhelm 00:28:26.415 ************************************ 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:26.415 
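The teardown traced above (srq_overwhelm.sh lines 40-43) follows a fixed pattern per subsystem: disconnect the initiator, poll until the namespace's serial disappears from `lsblk`, then delete the subsystem over RPC. A minimal sketch of that pattern, assuming the cnode NQNs and SPDK serials copied from the log; `wait_until_gone` is my own name for the `waitforserial_disconnect` polling, and `cleanup_subsystems` is only defined here since it needs a live NVMe-oF target to actually run:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem teardown seen in the trace; not the exact
# in-tree script source.
wait_until_gone() {
  # Re-run the probe command until it stops succeeding, or give up.
  local probe=$1 tries=0
  while eval "$probe"; do
    tries=$((tries + 1))
    [ "$tries" -ge 20 ] && return 1   # bounded retries, ~2s total
    sleep 0.1
  done
  return 0
}

cleanup_subsystems() {  # assumes nvme-cli and SPDK's rpc_cmd are available
  local i
  for i in $(seq 0 5); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # Serial SPDK0000000000000$i must vanish from the block-device list.
    wait_until_gone "lsblk -l -o NAME,SERIAL | grep -q -w SPDK0000000000000$i"
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done
}
```

Polling `lsblk` rather than trusting the disconnect's exit status matters here: the kernel tears the namespace down asynchronously, so the serial can linger briefly after `nvme disconnect` returns.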
05:45:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:26.415 ************************************ 00:28:26.415 START TEST nvmf_shutdown 00:28:26.415 ************************************ 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:26.415 * Looking for test storage... 00:28:26.415 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:26.415 05:45:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 
00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.673 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.674 --rc genhtml_branch_coverage=1 00:28:26.674 --rc genhtml_function_coverage=1 00:28:26.674 --rc genhtml_legend=1 00:28:26.674 --rc geninfo_all_blocks=1 00:28:26.674 --rc geninfo_unexecuted_blocks=1 00:28:26.674 00:28:26.674 ' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.674 --rc genhtml_branch_coverage=1 00:28:26.674 --rc genhtml_function_coverage=1 00:28:26.674 --rc genhtml_legend=1 00:28:26.674 --rc geninfo_all_blocks=1 00:28:26.674 --rc geninfo_unexecuted_blocks=1 00:28:26.674 00:28:26.674 ' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.674 --rc genhtml_branch_coverage=1 00:28:26.674 --rc genhtml_function_coverage=1 00:28:26.674 --rc genhtml_legend=1 00:28:26.674 --rc geninfo_all_blocks=1 00:28:26.674 --rc geninfo_unexecuted_blocks=1 00:28:26.674 00:28:26.674 ' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.674 --rc genhtml_branch_coverage=1 00:28:26.674 --rc 
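The `lt 1.15 2` trace above walks `cmp_versions` from scripts/common.sh: split both versions on `.`, `-`, or `:`, then compare field by field, treating missing fields as zero. A hedged re-creation of that logic under the name `ver_lt` (the helper's internals here are reconstructed from the trace, not copied from the script):

```shell
#!/usr/bin/env bash
# Dotted-version less-than, reconstructed from the cmp_versions trace.
ver_lt() {
  local IFS=.-:            # split on any of . - :  (as in the trace's IFS)
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v a b
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    a=${ver1[v]:-0}        # absent fields compare as 0
    b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                 # equal versions are not less-than
}
```

So `ver_lt 1.15 2` succeeds (1 < 2 in the first field), which is why the trace ends with `return 0` and the lcov options are applied.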
genhtml_function_coverage=1 00:28:26.674 --rc genhtml_legend=1 00:28:26.674 --rc geninfo_all_blocks=1 00:28:26.674 --rc geninfo_unexecuted_blocks=1 00:28:26.674 00:28:26.674 ' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:26.674 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown 
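Note how the paths/export.sh trace above prepends the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories on every sourcing, so `PATH` accumulates many duplicate entries. The duplicates are harmless for lookup (the first match wins) but bloat every child environment. A small dedup pass that keeps the first occurrence of each entry, offered as my own illustrative helper rather than anything in the SPDK tree:

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH entries, preserving first-seen order.
dedup_path() {
  local out= seen=: dir
  local IFS=:
  for dir in $1; do
    case "$seen" in
      *":$dir:"*) ;;                       # already kept this entry
      *) out=${out:+$out:}$dir; seen=$seen$dir: ;;
    esac
  done
  printf '%s\n' "$out"
}
# Usage sketch: PATH=$(dedup_path "$PATH")
```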
-- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.674 ************************************ 00:28:26.674 START TEST nvmf_shutdown_tc1 00:28:26.674 ************************************ 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:26.674 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.675 05:45:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:36.650 05:45:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.650 05:45:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:36.650 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:36.650 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:36.651 05:45:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:36.651 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:36.651 05:45:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:36.651 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:36.651 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:36.651 05:45:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:36.651 05:45:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:36.651 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:36.651 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:36.651 altname enp217s0f0np0 00:28:36.651 altname ens818f0np0 00:28:36.651 inet 192.168.100.8/24 scope global mlx_0_0 00:28:36.651 valid_lft forever preferred_lft forever 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:36.651 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:36.651 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:36.651 altname enp217s0f1np1 00:28:36.651 altname ens818f1np1 00:28:36.651 inet 192.168.100.9/24 scope global mlx_0_1 00:28:36.651 valid_lft forever preferred_lft forever 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@450 -- # return 0 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:36.651 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:36.652 05:45:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 
00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:36.652 192.168.100.9' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:36.652 192.168.100.9' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:36.652 192.168.100.9' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3465983 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3465983 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3465983 ']' 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.652 05:45:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.652 [2024-11-27 05:45:31.878676] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:36.652 [2024-11-27 05:45:31.878780] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.652 [2024-11-27 05:45:32.031182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.652 [2024-11-27 05:45:32.128779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.652 [2024-11-27 05:45:32.128833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.652 [2024-11-27 05:45:32.128847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.652 [2024-11-27 05:45:32.128861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.652 [2024-11-27 05:45:32.128871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:36.652 [2024-11-27 05:45:32.131417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.652 [2024-11-27 05:45:32.131447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.652 [2024-11-27 05:45:32.131535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.652 [2024-11-27 05:45:32.131559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.652 05:45:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.652 [2024-11-27 05:45:32.764467] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7effd85a6940) succeed. 
00:28:36.652 [2024-11-27 05:45:32.774573] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7effd8562940) succeed. 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.652 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.653 05:45:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:28:36.653 Malloc1 00:28:36.653 [2024-11-27 05:45:33.193751] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:36.911 Malloc2 00:28:36.911 Malloc3 00:28:36.911 Malloc4 00:28:37.169 Malloc5 00:28:37.169 Malloc6 00:28:37.169 Malloc7 00:28:37.427 Malloc8 00:28:37.427 Malloc9 00:28:37.427 Malloc10 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3466307 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3466307 /var/tmp/bdevperf.sock 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3466307 ']' 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:37.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.686 { 00:28:37.686 "params": { 00:28:37.686 "name": "Nvme$subsystem", 00:28:37.686 "trtype": "$TEST_TRANSPORT", 00:28:37.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.686 "adrfam": "ipv4", 00:28:37.686 "trsvcid": "$NVMF_PORT", 00:28:37.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.686 "hdgst": ${hdgst:-false}, 00:28:37.686 "ddgst": ${ddgst:-false} 00:28:37.686 }, 00:28:37.686 "method": "bdev_nvme_attach_controller" 00:28:37.686 } 00:28:37.686 EOF 00:28:37.686 )") 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.686 05:45:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.686 { 00:28:37.686 "params": { 00:28:37.686 "name": "Nvme$subsystem", 00:28:37.686 "trtype": "$TEST_TRANSPORT", 00:28:37.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.686 "adrfam": "ipv4", 00:28:37.686 "trsvcid": "$NVMF_PORT", 00:28:37.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.686 "hdgst": ${hdgst:-false}, 00:28:37.686 "ddgst": ${ddgst:-false} 00:28:37.686 }, 00:28:37.686 "method": "bdev_nvme_attach_controller" 00:28:37.686 } 00:28:37.686 EOF 00:28:37.686 )") 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.686 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.686 { 00:28:37.686 "params": { 00:28:37.686 "name": "Nvme$subsystem", 00:28:37.686 "trtype": "$TEST_TRANSPORT", 00:28:37.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.686 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 "hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.687 
{ 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme$subsystem", 00:28:37.687 "trtype": "$TEST_TRANSPORT", 00:28:37.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 "hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.687 { 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme$subsystem", 00:28:37.687 "trtype": "$TEST_TRANSPORT", 00:28:37.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 "hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.687 { 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme$subsystem", 00:28:37.687 "trtype": "$TEST_TRANSPORT", 00:28:37.687 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 "hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.687 { 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme$subsystem", 00:28:37.687 "trtype": "$TEST_TRANSPORT", 00:28:37.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 "hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.687 { 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme$subsystem", 00:28:37.687 "trtype": "$TEST_TRANSPORT", 00:28:37.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 "hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.687 { 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme$subsystem", 00:28:37.687 "trtype": "$TEST_TRANSPORT", 00:28:37.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 "hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.687 { 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme$subsystem", 00:28:37.687 "trtype": "$TEST_TRANSPORT", 00:28:37.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "$NVMF_PORT", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.687 
"hdgst": ${hdgst:-false}, 00:28:37.687 "ddgst": ${ddgst:-false} 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 } 00:28:37.687 EOF 00:28:37.687 )") 00:28:37.687 [2024-11-27 05:45:34.144476] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:37.687 [2024-11-27 05:45:34.144566] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:37.687 05:45:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme1", 00:28:37.687 "trtype": "rdma", 00:28:37.687 "traddr": "192.168.100.8", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "4420", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.687 "hdgst": false, 00:28:37.687 "ddgst": false 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 },{ 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme2", 00:28:37.687 "trtype": "rdma", 00:28:37.687 "traddr": "192.168.100.8", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "4420", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.687 "hdgst": false, 00:28:37.687 "ddgst": false 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 },{ 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme3", 00:28:37.687 
"trtype": "rdma", 00:28:37.687 "traddr": "192.168.100.8", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "4420", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:37.687 "hdgst": false, 00:28:37.687 "ddgst": false 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 },{ 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme4", 00:28:37.687 "trtype": "rdma", 00:28:37.687 "traddr": "192.168.100.8", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "4420", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:37.687 "hdgst": false, 00:28:37.687 "ddgst": false 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 },{ 00:28:37.687 "params": { 00:28:37.687 "name": "Nvme5", 00:28:37.687 "trtype": "rdma", 00:28:37.687 "traddr": "192.168.100.8", 00:28:37.687 "adrfam": "ipv4", 00:28:37.687 "trsvcid": "4420", 00:28:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:37.687 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:37.687 "hdgst": false, 00:28:37.687 "ddgst": false 00:28:37.687 }, 00:28:37.687 "method": "bdev_nvme_attach_controller" 00:28:37.687 },{ 00:28:37.687 "params": { 00:28:37.688 "name": "Nvme6", 00:28:37.688 "trtype": "rdma", 00:28:37.688 "traddr": "192.168.100.8", 00:28:37.688 "adrfam": "ipv4", 00:28:37.688 "trsvcid": "4420", 00:28:37.688 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:37.688 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:37.688 "hdgst": false, 00:28:37.688 "ddgst": false 00:28:37.688 }, 00:28:37.688 "method": "bdev_nvme_attach_controller" 00:28:37.688 },{ 00:28:37.688 "params": { 00:28:37.688 "name": "Nvme7", 00:28:37.688 "trtype": "rdma", 00:28:37.688 "traddr": "192.168.100.8", 00:28:37.688 "adrfam": "ipv4", 00:28:37.688 "trsvcid": "4420", 00:28:37.688 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:37.688 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:37.688 
"hdgst": false, 00:28:37.688 "ddgst": false 00:28:37.688 }, 00:28:37.688 "method": "bdev_nvme_attach_controller" 00:28:37.688 },{ 00:28:37.688 "params": { 00:28:37.688 "name": "Nvme8", 00:28:37.688 "trtype": "rdma", 00:28:37.688 "traddr": "192.168.100.8", 00:28:37.688 "adrfam": "ipv4", 00:28:37.688 "trsvcid": "4420", 00:28:37.688 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:37.688 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:37.688 "hdgst": false, 00:28:37.688 "ddgst": false 00:28:37.688 }, 00:28:37.688 "method": "bdev_nvme_attach_controller" 00:28:37.688 },{ 00:28:37.688 "params": { 00:28:37.688 "name": "Nvme9", 00:28:37.688 "trtype": "rdma", 00:28:37.688 "traddr": "192.168.100.8", 00:28:37.688 "adrfam": "ipv4", 00:28:37.688 "trsvcid": "4420", 00:28:37.688 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:37.688 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:37.688 "hdgst": false, 00:28:37.688 "ddgst": false 00:28:37.688 }, 00:28:37.688 "method": "bdev_nvme_attach_controller" 00:28:37.688 },{ 00:28:37.688 "params": { 00:28:37.688 "name": "Nvme10", 00:28:37.688 "trtype": "rdma", 00:28:37.688 "traddr": "192.168.100.8", 00:28:37.688 "adrfam": "ipv4", 00:28:37.688 "trsvcid": "4420", 00:28:37.688 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:37.688 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:37.688 "hdgst": false, 00:28:37.688 "ddgst": false 00:28:37.688 }, 00:28:37.688 "method": "bdev_nvme_attach_controller" 00:28:37.688 }' 00:28:37.946 [2024-11-27 05:45:34.303410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.946 [2024-11-27 05:45:34.409514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3466307 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:39.320 05:45:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:40.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3466307 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3465983 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.254 { 00:28:40.254 "params": { 00:28:40.254 "name": "Nvme$subsystem", 00:28:40.254 "trtype": "$TEST_TRANSPORT", 00:28:40.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.254 "adrfam": "ipv4", 00:28:40.254 "trsvcid": "$NVMF_PORT", 00:28:40.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.254 "hdgst": ${hdgst:-false}, 00:28:40.254 "ddgst": ${ddgst:-false} 00:28:40.254 }, 00:28:40.254 "method": "bdev_nvme_attach_controller" 00:28:40.254 } 00:28:40.254 EOF 00:28:40.254 )") 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.254 { 00:28:40.254 "params": { 00:28:40.254 "name": "Nvme$subsystem", 00:28:40.254 "trtype": "$TEST_TRANSPORT", 00:28:40.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.254 "adrfam": "ipv4", 00:28:40.254 "trsvcid": "$NVMF_PORT", 00:28:40.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.254 "hdgst": ${hdgst:-false}, 00:28:40.254 "ddgst": ${ddgst:-false} 00:28:40.254 }, 00:28:40.254 "method": "bdev_nvme_attach_controller" 00:28:40.254 } 00:28:40.254 EOF 00:28:40.254 )") 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.254 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:28:40.254 { 00:28:40.254 "params": { 00:28:40.254 "name": "Nvme$subsystem", 00:28:40.254 "trtype": "$TEST_TRANSPORT", 00:28:40.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.254 "adrfam": "ipv4", 00:28:40.254 "trsvcid": "$NVMF_PORT", 00:28:40.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.254 "hdgst": ${hdgst:-false}, 00:28:40.254 "ddgst": ${ddgst:-false} 00:28:40.254 }, 00:28:40.254 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.255 { 00:28:40.255 "params": { 00:28:40.255 "name": "Nvme$subsystem", 00:28:40.255 "trtype": "$TEST_TRANSPORT", 00:28:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.255 "adrfam": "ipv4", 00:28:40.255 "trsvcid": "$NVMF_PORT", 00:28:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.255 "hdgst": ${hdgst:-false}, 00:28:40.255 "ddgst": ${ddgst:-false} 00:28:40.255 }, 00:28:40.255 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.255 { 00:28:40.255 "params": { 00:28:40.255 "name": "Nvme$subsystem", 00:28:40.255 
"trtype": "$TEST_TRANSPORT", 00:28:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.255 "adrfam": "ipv4", 00:28:40.255 "trsvcid": "$NVMF_PORT", 00:28:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.255 "hdgst": ${hdgst:-false}, 00:28:40.255 "ddgst": ${ddgst:-false} 00:28:40.255 }, 00:28:40.255 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.255 { 00:28:40.255 "params": { 00:28:40.255 "name": "Nvme$subsystem", 00:28:40.255 "trtype": "$TEST_TRANSPORT", 00:28:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.255 "adrfam": "ipv4", 00:28:40.255 "trsvcid": "$NVMF_PORT", 00:28:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.255 "hdgst": ${hdgst:-false}, 00:28:40.255 "ddgst": ${ddgst:-false} 00:28:40.255 }, 00:28:40.255 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.255 { 00:28:40.255 "params": { 00:28:40.255 "name": "Nvme$subsystem", 00:28:40.255 "trtype": "$TEST_TRANSPORT", 00:28:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.255 "adrfam": "ipv4", 00:28:40.255 
"trsvcid": "$NVMF_PORT", 00:28:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.255 "hdgst": ${hdgst:-false}, 00:28:40.255 "ddgst": ${ddgst:-false} 00:28:40.255 }, 00:28:40.255 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.255 { 00:28:40.255 "params": { 00:28:40.255 "name": "Nvme$subsystem", 00:28:40.255 "trtype": "$TEST_TRANSPORT", 00:28:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.255 "adrfam": "ipv4", 00:28:40.255 "trsvcid": "$NVMF_PORT", 00:28:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.255 "hdgst": ${hdgst:-false}, 00:28:40.255 "ddgst": ${ddgst:-false} 00:28:40.255 }, 00:28:40.255 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.255 { 00:28:40.255 "params": { 00:28:40.255 "name": "Nvme$subsystem", 00:28:40.255 "trtype": "$TEST_TRANSPORT", 00:28:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.255 "adrfam": "ipv4", 00:28:40.255 "trsvcid": "$NVMF_PORT", 00:28:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.255 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:40.255 "hdgst": ${hdgst:-false}, 00:28:40.255 "ddgst": ${ddgst:-false} 00:28:40.255 }, 00:28:40.255 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.255 { 00:28:40.255 "params": { 00:28:40.255 "name": "Nvme$subsystem", 00:28:40.255 "trtype": "$TEST_TRANSPORT", 00:28:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.255 "adrfam": "ipv4", 00:28:40.255 "trsvcid": "$NVMF_PORT", 00:28:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.255 "hdgst": ${hdgst:-false}, 00:28:40.255 "ddgst": ${ddgst:-false} 00:28:40.255 }, 00:28:40.255 "method": "bdev_nvme_attach_controller" 00:28:40.255 } 00:28:40.255 EOF 00:28:40.255 )") 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.255 [2024-11-27 05:45:36.589812] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:40.255 [2024-11-27 05:45:36.589908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466795 ] 00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:40.255 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:40.256 05:45:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme1", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme2", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme3", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme4", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 
00:28:40.256 "params": { 00:28:40.256 "name": "Nvme5", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme6", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme7", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme8", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme9", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 
00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 },{ 00:28:40.256 "params": { 00:28:40.256 "name": "Nvme10", 00:28:40.256 "trtype": "rdma", 00:28:40.256 "traddr": "192.168.100.8", 00:28:40.256 "adrfam": "ipv4", 00:28:40.256 "trsvcid": "4420", 00:28:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:40.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:40.256 "hdgst": false, 00:28:40.256 "ddgst": false 00:28:40.256 }, 00:28:40.256 "method": "bdev_nvme_attach_controller" 00:28:40.256 }' 00:28:40.256 [2024-11-27 05:45:36.748426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.514 [2024-11-27 05:45:36.858949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.448 Running I/O for 1 seconds... 00:28:42.821 2845.00 IOPS, 177.81 MiB/s 00:28:42.821 Latency(us) 00:28:42.822 [2024-11-27T04:45:39.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.822 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme1n1 : 1.21 336.91 21.06 0.00 0.00 186540.51 11219.76 266757.73 00:28:42.822 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme2n1 : 1.21 344.77 21.55 0.00 0.00 179725.11 6265.24 184549.38 00:28:42.822 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme3n1 : 1.21 342.70 21.42 0.00 0.00 178310.85 11639.19 176999.63 00:28:42.822 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme4n1 : 1.22 346.43 21.65 0.00 0.00 173874.76 6055.53 169449.88 00:28:42.822 Job: Nvme5n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme5n1 : 1.22 328.86 20.55 0.00 0.00 180203.81 11586.76 158544.69 00:28:42.822 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme6n1 : 1.22 349.75 21.86 0.00 0.00 167707.42 11901.34 153511.53 00:28:42.822 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme7n1 : 1.22 353.34 22.08 0.00 0.00 163686.15 5505.02 145961.78 00:28:42.822 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme8n1 : 1.22 344.74 21.55 0.00 0.00 164854.55 5976.88 142606.34 00:28:42.822 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme9n1 : 1.22 327.21 20.45 0.00 0.00 170571.20 12949.91 131701.15 00:28:42.822 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.822 Verification LBA range: start 0x0 length 0x400 00:28:42.822 Nvme10n1 : 1.22 313.56 19.60 0.00 0.00 176672.56 12163.48 201326.59 00:28:42.822 [2024-11-27T04:45:39.409Z] =================================================================================================================== 00:28:42.822 [2024-11-27T04:45:39.409Z] Total : 3388.26 211.77 0.00 0.00 174099.48 5505.02 266757.73 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:44.196 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:44.197 rmmod nvme_rdma 00:28:44.197 rmmod nvme_fabrics 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3465983 ']' 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3465983 00:28:44.197 05:45:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3465983 ']' 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3465983 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3465983 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3465983' 00:28:44.197 killing process with pid 3465983 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3465983 00:28:44.197 05:45:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3465983 00:28:47.477 05:45:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.477 05:45:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:47.477 00:28:47.477 real 0m20.880s 00:28:47.477 user 0m52.212s 00:28:47.477 sys 0m8.335s 00:28:47.477 05:45:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.477 05:45:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.477 ************************************ 00:28:47.477 END TEST nvmf_shutdown_tc1 00:28:47.477 ************************************ 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:47.477 ************************************ 00:28:47.477 START TEST nvmf_shutdown_tc2 00:28:47.477 ************************************ 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.477 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.478 
05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:47.478 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:47.478 
05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:47.478 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:47.478 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.736 05:45:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:47.736 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.736 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.1: mlx_0_1' 00:28:47.737 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:47.737 05:45:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # 
continue 2 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 
192.168.100.8 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:47.737 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:47.737 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:47.737 altname enp217s0f0np0 00:28:47.737 altname ens818f0np0 00:28:47.737 inet 192.168.100.8/24 scope global mlx_0_0 00:28:47.737 valid_lft forever preferred_lft forever 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:47.737 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:47.737 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:47.737 altname enp217s0f1np1 00:28:47.737 altname ens818f1np1 00:28:47.737 inet 192.168.100.9/24 scope global mlx_0_1 00:28:47.737 valid_lft forever preferred_lft forever 00:28:47.737 05:45:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 
-- # echo mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for 
nic_name in $(get_rdma_if_list) 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:47.737 192.168.100.9' 00:28:47.737 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:47.738 192.168.100.9' 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:47.738 192.168.100.9' 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:47.738 05:45:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3468078 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3468078 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3468078 ']' 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.738 05:45:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.738 05:45:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.995 [2024-11-27 05:45:44.391029] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:47.995 [2024-11-27 05:45:44.391129] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.995 [2024-11-27 05:45:44.546879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.252 [2024-11-27 05:45:44.648728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.252 [2024-11-27 05:45:44.648777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.252 [2024-11-27 05:45:44.648790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.252 [2024-11-27 05:45:44.648802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.252 [2024-11-27 05:45:44.648812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:48.252 [2024-11-27 05:45:44.651361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.252 [2024-11-27 05:45:44.651392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.252 [2024-11-27 05:45:44.651498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.252 [2024-11-27 05:45:44.651523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.817 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.817 [2024-11-27 05:45:45.289409] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f00763a6940) succeed. 
00:28:48.817 [2024-11-27 05:45:45.298887] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f0076362940) succeed. 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.113 05:45:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:28:49.403 Malloc1 00:28:49.403 [2024-11-27 05:45:45.711843] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:49.403 Malloc2 00:28:49.403 Malloc3 00:28:49.403 Malloc4 00:28:49.660 Malloc5 00:28:49.660 Malloc6 00:28:49.918 Malloc7 00:28:49.918 Malloc8 00:28:49.918 Malloc9 00:28:50.177 Malloc10 00:28:50.177 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.177 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:50.177 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3468613 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3468613 /var/tmp/bdevperf.sock 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3468613 ']' 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:50.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 
05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 
00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 
00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.178 { 00:28:50.178 "params": { 00:28:50.178 "name": "Nvme$subsystem", 00:28:50.178 "trtype": "$TEST_TRANSPORT", 00:28:50.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.178 "adrfam": "ipv4", 00:28:50.178 "trsvcid": "$NVMF_PORT", 00:28:50.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.178 "hdgst": ${hdgst:-false}, 00:28:50.178 "ddgst": ${ddgst:-false} 00:28:50.178 }, 00:28:50.178 "method": "bdev_nvme_attach_controller" 00:28:50.178 } 00:28:50.178 EOF 00:28:50.178 )") 00:28:50.178 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.179 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.179 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.179 { 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme$subsystem", 00:28:50.179 "trtype": "$TEST_TRANSPORT", 00:28:50.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "$NVMF_PORT", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:28:50.179 "hdgst": ${hdgst:-false}, 00:28:50.179 "ddgst": ${ddgst:-false} 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 } 00:28:50.179 EOF 00:28:50.179 )") 00:28:50.179 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:50.179 [2024-11-27 05:45:46.686196] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:50.179 [2024-11-27 05:45:46.686282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468613 ] 00:28:50.179 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:50.179 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:50.179 05:45:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme1", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme2", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 
"name": "Nvme3", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme4", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme5", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme6", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme7", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:50.179 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme8", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme9", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 },{ 00:28:50.179 "params": { 00:28:50.179 "name": "Nvme10", 00:28:50.179 "trtype": "rdma", 00:28:50.179 "traddr": "192.168.100.8", 00:28:50.179 "adrfam": "ipv4", 00:28:50.179 "trsvcid": "4420", 00:28:50.179 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:50.179 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:50.179 "hdgst": false, 00:28:50.179 "ddgst": false 00:28:50.179 }, 00:28:50.179 "method": "bdev_nvme_attach_controller" 00:28:50.179 }' 00:28:50.437 [2024-11-27 05:45:46.841888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.437 [2024-11-27 05:45:46.951150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.810 Running I/O for 10 seconds... 
00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:51.810 05:45:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:51.810 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:52.068 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:52.068 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:52.068 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:52.068 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:52.068 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.068 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=150 00:28:52.327 
05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 150 -ge 100 ']' 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3468613 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3468613 ']' 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3468613 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468613 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468613' 00:28:52.327 killing process with pid 3468613 00:28:52.327 05:45:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3468613 00:28:52.327 05:45:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3468613 00:28:52.584 Received shutdown signal, test time was about 0.861553 seconds 00:28:52.584 00:28:52.584 Latency(us) 00:28:52.584 [2024-11-27T04:45:49.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.584 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.584 Verification LBA range: start 0x0 length 0x400 00:28:52.584 Nvme1n1 : 0.85 325.07 20.32 0.00 0.00 191627.74 10905.19 198810.01 00:28:52.584 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.584 Verification LBA range: start 0x0 length 0x400 00:28:52.584 Nvme2n1 : 0.85 324.62 20.29 0.00 0.00 187852.49 10695.48 184549.38 00:28:52.584 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.584 Verification LBA range: start 0x0 length 0x400 00:28:52.584 Nvme3n1 : 0.85 347.62 21.73 0.00 0.00 172404.24 5452.60 173644.19 00:28:52.584 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.584 Verification LBA range: start 0x0 length 0x400 00:28:52.584 Nvme4n1 : 0.85 357.52 22.34 0.00 0.00 164542.44 7497.32 166094.44 00:28:52.584 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.584 Verification LBA range: start 0x0 length 0x400 00:28:52.584 Nvme5n1 : 0.85 326.48 20.40 0.00 0.00 175730.96 11272.19 163577.86 00:28:52.584 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.584 Verification LBA range: start 0x0 length 0x400 00:28:52.584 Nvme6n1 : 0.85 341.12 21.32 0.00 0.00 165156.22 11481.91 157705.83 00:28:52.584 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.584 Verification LBA range: start 0x0 length 0x400 00:28:52.584 Nvme7n1 : 0.85 361.48 22.59 0.00 0.00 152845.99 6710.89 141767.48 00:28:52.584 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:28:52.585 Verification LBA range: start 0x0 length 0x400 00:28:52.585 Nvme8n1 : 0.86 373.47 23.34 0.00 0.00 145096.70 9542.04 112407.35 00:28:52.585 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.585 Verification LBA range: start 0x0 length 0x400 00:28:52.585 Nvme9n1 : 0.86 337.86 21.12 0.00 0.00 155852.02 13421.77 145961.78 00:28:52.585 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:52.585 Verification LBA range: start 0x0 length 0x400 00:28:52.585 Nvme10n1 : 0.86 297.47 18.59 0.00 0.00 174210.97 12111.05 208037.48 00:28:52.585 [2024-11-27T04:45:49.172Z] =================================================================================================================== 00:28:52.585 [2024-11-27T04:45:49.172Z] Total : 3392.71 212.04 0.00 0.00 167855.63 5452.60 208037.48 00:28:53.516 05:45:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3468078 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:54.889 rmmod nvme_rdma 00:28:54.889 rmmod nvme_fabrics 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:54.889 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3468078 ']' 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3468078 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3468078 ']' 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3468078 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468078 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468078' 00:28:54.890 killing process with pid 3468078 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3468078 00:28:54.890 05:45:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3468078 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:58.169 00:28:58.169 real 0m10.613s 00:28:58.169 user 0m41.106s 00:28:58.169 sys 0m1.695s 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.169 ************************************ 00:28:58.169 END TEST nvmf_shutdown_tc2 00:28:58.169 ************************************ 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:58.169 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:58.169 ************************************ 00:28:58.169 START TEST nvmf_shutdown_tc3 00:28:58.169 ************************************ 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.169 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.169 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # 
local -ga x722 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.170 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.170 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:58.429 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:58.429 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:58.429 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.429 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:58.429 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.429 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:58.430 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:58.430 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.430 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:58.430 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:58.430 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:58.430 altname enp217s0f0np0 
00:28:58.430 altname ens818f0np0 00:28:58.430 inet 192.168.100.8/24 scope global mlx_0_0 00:28:58.430 valid_lft forever preferred_lft forever 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:58.430 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:58.430 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:58.430 altname enp217s0f1np1 00:28:58.430 altname ens818f1np1 00:28:58.430 inet 192.168.100.9/24 scope global mlx_0_1 00:28:58.430 valid_lft forever preferred_lft forever 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.430 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:58.430 
05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.430 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:58.430 192.168.100.9' 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:58.431 192.168.100.9' 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:58.431 192.168.100.9' 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:58.431 05:45:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3470067 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3470067 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3470067 ']' 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
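The `waitforlisten` step above blocks until the freshly started `nvmf_tgt` is up and accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that wait-for-socket pattern is below; it is illustrative only (the function name and the `-e` existence check are simplifications — the real SPDK helper also verifies the pid is alive and probes the socket with an actual RPC):

```shell
# Sketch of a "waitforlisten"-style helper: poll until the target's RPC
# socket path appears, or give up after a retry budget.
# Assumption: checking path existence (-e) stands in for a real RPC probe.
wait_for_rpc_sock() {
    local sock_path=$1 retries=${2:-100}
    while (( retries > 0 )); do
        if [ -e "$sock_path" ]; then
            return 0          # socket path showed up; target is (likely) listening
        fi
        sleep 0.1             # back off briefly between polls
        retries=$(( retries - 1 ))
    done
    return 1                  # budget exhausted; caller treats this as startup failure
}
```

Usage would mirror the log: start the target in the background, then `wait_for_rpc_sock /var/tmp/spdk.sock || exit 1` before issuing any `rpc_cmd` calls.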
00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.431 05:45:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.689 [2024-11-27 05:45:55.053668] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:58.689 [2024-11-27 05:45:55.053764] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.689 [2024-11-27 05:45:55.208292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.948 [2024-11-27 05:45:55.308480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.948 [2024-11-27 05:45:55.308525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.948 [2024-11-27 05:45:55.308540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.948 [2024-11-27 05:45:55.308552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.948 [2024-11-27 05:45:55.308561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
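The four "Reactor started on core N" notices that follow come directly from the `-m 0x1E` core mask passed to `nvmf_tgt` above. A small sketch of how such a mask decodes to core numbers (the helper name is made up for illustration):

```shell
# Decode an SPDK-style hex core mask into the CPU cores it selects.
# 0x1E is binary 11110, i.e. bits 1-4 set, which is why the log shows
# reactors starting on cores 1, 2, 3 and 4 (and not core 0).
mask_to_cores() {
    local mask=$(( $1 )) cores=()
    for core in {0..31}; do
        if (( (mask >> core) & 1 )); then
            cores+=("$core")        # bit set => reactor pinned to this core
        fi
    done
    echo "${cores[*]}"
}

mask_to_cores 0x1E    # prints: 1 2 3 4
```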
00:28:58.948 [2024-11-27 05:45:55.310947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.948 [2024-11-27 05:45:55.310978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.948 [2024-11-27 05:45:55.311096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.949 [2024-11-27 05:45:55.311120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.516 05:45:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.516 [2024-11-27 05:45:55.971329] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f095851d940) succeed. 
00:28:59.516 [2024-11-27 05:45:55.981348] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f09583bd940) succeed. 00:28:59.774 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.774 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:59.774 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:59.774 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.774 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.774 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.775 05:45:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.033 Malloc1 00:29:00.033 [2024-11-27 05:45:56.400183] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:00.033 Malloc2 00:29:00.033 Malloc3 00:29:00.292 Malloc4 00:29:00.292 Malloc5 00:29:00.292 Malloc6 00:29:00.550 Malloc7 00:29:00.550 Malloc8 00:29:00.550 Malloc9 00:29:00.810 Malloc10 00:29:00.810 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.810 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:00.810 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.810 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.810 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3470408 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3470408 /var/tmp/bdevperf.sock 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3470408 ']' 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 
05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 
00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 
00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.811 { 00:29:00.811 "params": { 00:29:00.811 "name": "Nvme$subsystem", 00:29:00.811 "trtype": "$TEST_TRANSPORT", 00:29:00.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.811 "adrfam": "ipv4", 00:29:00.811 "trsvcid": "$NVMF_PORT", 00:29:00.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.811 "hdgst": ${hdgst:-false}, 00:29:00.811 "ddgst": ${ddgst:-false} 00:29:00.811 }, 00:29:00.811 "method": "bdev_nvme_attach_controller" 00:29:00.811 } 00:29:00.811 EOF 00:29:00.811 )") 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.811 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.812 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.812 { 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme$subsystem", 00:29:00.812 "trtype": "$TEST_TRANSPORT", 00:29:00.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "$NVMF_PORT", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:29:00.812 "hdgst": ${hdgst:-false}, 00:29:00.812 "ddgst": ${ddgst:-false} 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 } 00:29:00.812 EOF 00:29:00.812 )") 00:29:00.812 [2024-11-27 05:45:57.353845] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:29:00.812 [2024-11-27 05:45:57.353938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470408 ] 00:29:00.812 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.812 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:00.812 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:00.812 05:45:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme1", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme2", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 
"name": "Nvme3", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme4", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme5", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme6", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme7", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:00.812 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme8", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme9", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 },{ 00:29:00.812 "params": { 00:29:00.812 "name": "Nvme10", 00:29:00.812 "trtype": "rdma", 00:29:00.812 "traddr": "192.168.100.8", 00:29:00.812 "adrfam": "ipv4", 00:29:00.812 "trsvcid": "4420", 00:29:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:00.812 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:00.812 "hdgst": false, 00:29:00.812 "ddgst": false 00:29:00.812 }, 00:29:00.812 "method": "bdev_nvme_attach_controller" 00:29:00.812 }' 00:29:01.071 [2024-11-27 05:45:57.511556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.071 [2024-11-27 05:45:57.616765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.447 Running I/O for 10 seconds... 
00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:02.447 05:45:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.447 05:45:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:02.706 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.706 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=8 00:29:02.706 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 8 -ge 100 ']' 00:29:02.706 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:02.964 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:02.964 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=161 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 161 -ge 100 ']' 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3470067 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3470067 ']' 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3470067 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.965 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470067 00:29:03.224 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.224 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.224 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470067' 00:29:03.224 killing process with pid 3470067 
00:29:03.224 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3470067 00:29:03.224 05:45:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3470067 00:29:04.170 2605.00 IOPS, 162.81 MiB/s [2024-11-27T04:46:00.757Z] [2024-11-27 05:46:00.648262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.648330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.648349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.648362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.648375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.648387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.648400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.648412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.650949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.170 [2024-11-27 05:46:00.650978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:04.170 [2024-11-27 05:46:00.651017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.651034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:bb60 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.651047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.651061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:bb60 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.651073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.651086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:bb60 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.651098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.651110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:bb60 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.653377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.170 [2024-11-27 05:46:00.653396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:29:04.170 [2024-11-27 05:46:00.653419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.653437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.653451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.653464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.653476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.653489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.653501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.653513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.655450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.170 [2024-11-27 05:46:00.655470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:29:04.170 [2024-11-27 05:46:00.655493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.655507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.655522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.655534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.655546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.655558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.655571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.655583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.657705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.170 [2024-11-27 05:46:00.657724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:29:04.170 [2024-11-27 05:46:00.657746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.657760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.657774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.657787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.657815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.657831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.657848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.657867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.659903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.170 [2024-11-27 05:46:00.659928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:29:04.170 [2024-11-27 05:46:00.659964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.659981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.659999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.660014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.660032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.660047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.660064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.660079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.661884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.170 [2024-11-27 05:46:00.661906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:29:04.170 [2024-11-27 05:46:00.661935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.661953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.661970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.661986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.662002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.662018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.662034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.170 [2024-11-27 05:46:00.662050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.170 [2024-11-27 05:46:00.664379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.171 [2024-11-27 05:46:00.664403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:29:04.171 [2024-11-27 05:46:00.664429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.664447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:d960 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.664469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.664484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:d960 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.664501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.664516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:d960 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.664533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.664549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32714 cdw0:0 sqhd:d960 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.667014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.171 [2024-11-27 05:46:00.667037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:29:04.171 [2024-11-27 05:46:00.667066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.667083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:e2ea80 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.667100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.667116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:e2ea80 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.667133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.667149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:e2ea80 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.667165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.667181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:e2ea80 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.669606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.171 [2024-11-27 05:46:00.669635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:29:04.171 [2024-11-27 05:46:00.669664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.669682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.669699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.669715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.669733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.669749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.669766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.171 [2024-11-27 05:46:00.669782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.672352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.171 [2024-11-27 05:46:00.672376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:04.171 [2024-11-27 05:46:00.674921] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:29:04.171 [2024-11-27 05:46:00.677294] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:04.171 [2024-11-27 05:46:00.679637] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:04.171 [2024-11-27 05:46:00.682548] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:04.171 [2024-11-27 05:46:00.685261] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:04.171 [2024-11-27 05:46:00.687684] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:04.171 [2024-11-27 05:46:00.689988] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:04.171 [2024-11-27 05:46:00.692223] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:04.171 [2024-11-27 05:46:00.694377] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:29:04.171 [2024-11-27 05:46:00.694487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf180 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf0c0 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f000 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8ef40 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7ee80 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6edc0 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5ed00 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4ec40 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3eb80 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2eac0 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1ea00 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.694963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0e940 len:0x10000 key:0x184300 00:29:04.171 [2024-11-27 05:46:00.694980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.695002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031effc0 len:0x10000 key:0x184400 00:29:04.171 [2024-11-27 05:46:00.695019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.695042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff00 len:0x10000 key:0x184400 00:29:04.171 [2024-11-27 05:46:00.695059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.695081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cfe40 len:0x10000 key:0x184400 00:29:04.171 [2024-11-27 05:46:00.695098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.695121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2010031bfd80 len:0x10000 key:0x184400 00:29:04.171 [2024-11-27 05:46:00.695138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.695160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afcc0 len:0x10000 key:0x184400 00:29:04.171 [2024-11-27 05:46:00.695177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.171 [2024-11-27 05:46:00.695198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fc00 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fb40 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fa80 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316f9c0 len:0x10000 key:0x184400 00:29:04.172 
[2024-11-27 05:46:00.695338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315f900 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314f840 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313f780 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312f6c0 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f600 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f540 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff480 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef3c0 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df300 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf240 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf180 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af0c0 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f000 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308ef40 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.695970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307ee80 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.695987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 
dnr:0 00:29:04.172 [2024-11-27 05:46:00.696011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306edc0 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.696028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305ed00 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.696068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304ec40 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.696109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303eb80 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.696149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302eac0 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.696193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301ea00 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.696232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300e940 len:0x10000 key:0x184400 00:29:04.172 [2024-11-27 05:46:00.696272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033effc0 len:0x10000 key:0x184700 00:29:04.172 [2024-11-27 05:46:00.696314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff00 len:0x10000 key:0x184700 00:29:04.172 [2024-11-27 05:46:00.696353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cfe40 len:0x10000 key:0x184700 00:29:04.172 [2024-11-27 05:46:00.696393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.172 [2024-11-27 05:46:00.696415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfd80 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afcc0 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fc00 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fb40 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fa80 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20100336f9c0 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335f900 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334f840 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333f780 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332f6c0 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f600 len:0x10000 key:0x184700 
00:29:04.173 [2024-11-27 05:46:00.696842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f540 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff480 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef3c0 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.696962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.696984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df300 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.697001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.697023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf240 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.697040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.697062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf180 len:0x10000 key:0x184700 00:29:04.173 [2024-11-27 05:46:00.697082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.697104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf240 len:0x10000 key:0x184300 00:29:04.173 [2024-11-27 05:46:00.697122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32714 cdw0:0 sqhd:12e0 p:0 m:0 dnr:0 00:29:04.173 [2024-11-27 05:46:00.730115] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730219] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730244] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730261] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730278] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730295] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:29:04.173 [2024-11-27 05:46:00.730311] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730328] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730344] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730361] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.730378] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:04.173 [2024-11-27 05:46:00.738274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:04.173 [2024-11-27 05:46:00.738316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:04.173 [2024-11-27 05:46:00.738341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:04.173 [2024-11-27 05:46:00.738919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:04.173 [2024-11-27 05:46:00.738953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:04.173 [2024-11-27 05:46:00.738971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:04.173 [2024-11-27 05:46:00.742520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:04.173 [2024-11-27 05:46:00.742555] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:04.432 task offset: 36608 on job bdev=Nvme1n1 fails 00:29:04.432 00:29:04.432 Latency(us) 00:29:04.432 [2024-11-27T04:46:01.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.432 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.432 Job: Nvme1n1 ended in about 1.96 seconds with error 00:29:04.432 Verification LBA range: start 0x0 length 0x400 00:29:04.432 Nvme1n1 : 1.96 133.37 8.34 32.70 0.00 381646.50 6396.31 1060320.05 00:29:04.432 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.432 Job: Nvme2n1 ended in about 1.96 seconds with error 00:29:04.432 Verification LBA range: start 0x0 length 0x400 00:29:04.432 Nvme2n1 : 1.96 130.76 8.17 32.69 0.00 384219.22 42572.19 1053609.16 00:29:04.433 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme3n1 ended in about 1.96 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 Nvme3n1 : 1.96 130.70 8.17 32.68 0.00 380968.30 52428.80 1053609.16 00:29:04.433 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme4n1 ended in about 1.96 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 Nvme4n1 : 1.96 142.38 8.90 32.66 0.00 352513.29 5688.52 1053609.16 00:29:04.433 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme5n1 ended in about 1.96 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 Nvme5n1 : 1.96 134.67 8.42 32.65 0.00 365570.04 10380.90 1053609.16 00:29:04.433 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme6n1 ended in about 1.96 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 
Nvme6n1 : 1.96 138.69 8.67 32.63 0.00 353928.18 12320.77 1046898.28 00:29:04.433 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme7n1 ended in about 1.96 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 Nvme7n1 : 1.96 146.78 9.17 32.62 0.00 335029.56 17930.65 1046898.28 00:29:04.433 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme8n1 ended in about 1.96 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 Nvme8n1 : 1.96 143.16 8.95 32.60 0.00 338818.61 26843.55 1046898.28 00:29:04.433 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme9n1 ended in about 1.96 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 Nvme9n1 : 1.96 130.36 8.15 32.59 0.00 362043.80 55364.81 1046898.28 00:29:04.433 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.433 Job: Nvme10n1 ended in about 1.92 seconds with error 00:29:04.433 Verification LBA range: start 0x0 length 0x400 00:29:04.433 Nvme10n1 : 1.92 99.82 6.24 33.27 0.00 440150.43 66689.43 1073741.82 00:29:04.433 [2024-11-27T04:46:01.020Z] =================================================================================================================== 00:29:04.433 [2024-11-27T04:46:01.020Z] Total : 1330.69 83.17 327.10 0.00 367331.09 5688.52 1073741.82 00:29:04.433 [2024-11-27 05:46:00.868626] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:04.433 [2024-11-27 05:46:00.868695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:04.433 [2024-11-27 05:46:00.868727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:04.433 [2024-11-27 05:46:00.879757] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: 
Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.879790] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.879806] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:29:04.433 [2024-11-27 05:46:00.879891] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.879906] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.879916] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200007fff240 00:29:04.433 [2024-11-27 05:46:00.879995] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.880013] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.880023] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177bb200 00:29:04.433 [2024-11-27 05:46:00.886164] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.886196] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.886211] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177d3dc0 00:29:04.433 [2024-11-27 05:46:00.886330] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel 
(status = 8) 00:29:04.433 [2024-11-27 05:46:00.886347] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.886359] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177be180 00:29:04.433 [2024-11-27 05:46:00.886463] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.886480] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.886492] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177c7500 00:29:04.433 [2024-11-27 05:46:00.886711] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.886732] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.886745] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001778ee00 00:29:04.433 [2024-11-27 05:46:00.886825] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.886842] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.886854] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001778e680 00:29:04.433 [2024-11-27 05:46:00.886961] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.886978] nvme_rdma.c:1077:nvme_rdma_connect_established: 
*ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.886989] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177a3bc0 00:29:04.433 [2024-11-27 05:46:00.887098] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.433 [2024-11-27 05:46:00.887114] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.433 [2024-11-27 05:46:00.887126] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017783a80 00:29:05.369 [2024-11-27 05:46:01.884090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.369 [2024-11-27 05:46:01.884143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:05.369 [2024-11-27 05:46:01.885668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.369 [2024-11-27 05:46:01.885687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:05.369 [2024-11-27 05:46:01.886995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.369 [2024-11-27 05:46:01.887016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:29:05.369 [2024-11-27 05:46:01.887069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:05.369 [2024-11-27 05:46:01.887084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:05.369 [2024-11-27 05:46:01.887099] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:29:05.369 [2024-11-27 05:46:01.887120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:05.369 [2024-11-27 05:46:01.887144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.887156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.887169] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.887181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:05.370 [2024-11-27 05:46:01.887199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.887210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.887222] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.887234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:05.370 [2024-11-27 05:46:01.890367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.370 [2024-11-27 05:46:01.890393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:05.370 [2024-11-27 05:46:01.891728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.370 [2024-11-27 05:46:01.891748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:05.370 [2024-11-27 05:46:01.893197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.370 [2024-11-27 05:46:01.893217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:05.370 [2024-11-27 05:46:01.894520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.370 [2024-11-27 05:46:01.894539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:05.370 [2024-11-27 05:46:01.896016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.370 [2024-11-27 05:46:01.896037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:29:05.370 [2024-11-27 05:46:01.897222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.370 [2024-11-27 05:46:01.897241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:05.370 [2024-11-27 05:46:01.898453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:05.370 [2024-11-27 05:46:01.898474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:05.370 [2024-11-27 05:46:01.898491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.898506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.898520] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.898537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:05.370 [2024-11-27 05:46:01.898560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.898575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.898589] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.898602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:05.370 [2024-11-27 05:46:01.898625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.898639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.898652] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.898668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:05.370 [2024-11-27 05:46:01.898764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.898782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.898797] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.898811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:05.370 [2024-11-27 05:46:01.898829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.898842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.898856] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.898870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:29:05.370 [2024-11-27 05:46:01.898887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.898901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.898914] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.898929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:05.370 [2024-11-27 05:46:01.898945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:05.370 [2024-11-27 05:46:01.898958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:05.370 [2024-11-27 05:46:01.898971] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:29:05.370 [2024-11-27 05:46:01.898985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:06.743 05:46:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3470408 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3470408 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3470408 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:07.679 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:07.680 rmmod nvme_rdma 00:29:07.680 rmmod nvme_fabrics 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:07.680 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3470067 ']' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3470067 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3470067 ']' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3470067 00:29:07.680 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3470067) - No such process 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3470067 is not found' 00:29:07.680 Process with pid 3470067 is not found 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:07.680 00:29:07.680 real 0m9.426s 00:29:07.680 user 0m34.157s 00:29:07.680 sys 0m1.893s 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.680 ************************************ 00:29:07.680 END TEST nvmf_shutdown_tc3 00:29:07.680 ************************************ 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 
00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.680 ************************************ 00:29:07.680 START TEST nvmf_shutdown_tc4 00:29:07.680 ************************************ 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.680 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.940 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@321 -- # x722=() 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.940 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:07.940 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.940 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:07.940 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.940 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:07.940 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:07.940 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.940 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:07.940 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:07.941 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:07.941 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.941 link/ether 
ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:07.941 altname enp217s0f0np0 00:29:07.941 altname ens818f0np0 00:29:07.941 inet 192.168.100.8/24 scope global mlx_0_0 00:29:07.941 valid_lft forever preferred_lft forever 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:07.941 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.941 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:07.941 altname enp217s0f1np1 00:29:07.941 altname ens818f1np1 00:29:07.941 inet 192.168.100.9/24 scope global mlx_0_1 00:29:07.941 valid_lft forever preferred_lft forever 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.941 05:46:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # 
for net_dev in "${net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:07.941 192.168.100.9' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:07.941 192.168.100.9' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:07.941 192.168.100.9' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:07.941 
05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:07.941 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3471823 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3471823 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3471823 ']' 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.942 05:46:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:08.200 [2024-11-27 05:46:04.586303] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:29:08.200 [2024-11-27 05:46:04.586399] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.200 [2024-11-27 05:46:04.740091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.459 [2024-11-27 05:46:04.840103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.459 [2024-11-27 05:46:04.840148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.459 [2024-11-27 05:46:04.840159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.459 [2024-11-27 05:46:04.840171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.459 [2024-11-27 05:46:04.840181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:08.459 [2024-11-27 05:46:04.842659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.459 [2024-11-27 05:46:04.842726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.459 [2024-11-27 05:46:04.842768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.459 [2024-11-27 05:46:04.842794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.025 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:09.025 [2024-11-27 05:46:05.480959] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fc5e0384940) succeed. 
00:29:09.025 [2024-11-27 05:46:05.490624] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fc5e0340940) succeed.
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.285 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.286 05:46:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:09.544 Malloc1
00:29:09.544 [2024-11-27 05:46:05.900028] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:29:09.544 Malloc2
00:29:09.544 Malloc3
00:29:09.804 Malloc4
00:29:09.804 Malloc5
00:29:09.804 Malloc6
00:29:10.063 Malloc7
00:29:10.063 Malloc8
00:29:10.063 Malloc9
00:29:10.321 Malloc10
00:29:10.321 05:46:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.321 05:46:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:29:10.321 05:46:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:10.321 05:46:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:10.321 05:46:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3472153
00:29:10.321 05:46:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:29:10.321 05:46:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:29:10.581 [2024-11-27 05:46:06.914629] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3471823
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3471823 ']'
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3471823
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3471823
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3471823'
killing process with pid 3471823
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3471823
00:29:15.848 05:46:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3471823
00:29:15.848 NVMe io qpair process completion error
00:29:15.848 starting I/O failed: -6
00:29:16.787 Write completed with error (sct=0, sc=8)
00:29:16.787 starting I/O failed: -6
00:29:16.787 [2024-11-27 05:46:13.025818] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:29:16.788 Write completed with error (sct=0, sc=8)
00:29:16.788 starting I/O failed: -6
00:29:16.788 [2024-11-27 05:46:13.051493] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed
00:29:16.789 Write completed with error (sct=0, sc=8)
00:29:16.789 [2024-11-27 05:46:13.078537] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed
00:29:16.789 Write completed with error (sct=0, sc=8)
00:29:16.789 Write completed
with error (sct=0, sc=8) 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 [2024-11-27 05:46:13.103413] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.789 Write completed with error (sct=0, sc=8) 00:29:16.789 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 
00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, 
sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error 
(sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 starting I/O failed: -6 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write 
completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write 
completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 [2024-11-27 05:46:13.125180] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed 
with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.790 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed 
with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed 
with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 [2024-11-27 05:46:13.151295] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error 
(sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error 
(sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error 
(sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.791 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error 
(sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 [2024-11-27 05:46:13.177386] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, sc=8) 00:29:16.792 Write completed with error (sct=0, 
sc=8)
00:29:16.792 [identical "Write completed with error (sct=0, sc=8)" entries repeated; duplicates collapsed]
00:29:16.792 [2024-11-27 05:46:13.202376] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Submitting Keep Alive failed
00:29:16.792 [identical "Write completed with error (sct=0, sc=8)" entries repeated; duplicates collapsed]
00:29:16.793 starting I/O failed: -6
00:29:16.793 [identical "Write completed with error (sct=0, sc=8)" entries repeated; duplicates collapsed]
00:29:16.793 [2024-11-27 05:46:13.254536] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:29:16.793 Initializing NVMe Controllers
00:29:16.793 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:29:16.793 Controller IO queue size 128, less than required.
00:29:16.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:16.793 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:29:16.793 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:29:16.793 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:29:16.794 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:29:16.794 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:29:16.794 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:29:16.794 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:29:16.794 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:29:16.794 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:29:16.794 [the same "Controller IO queue size 128, less than required." warning and advice line followed each attach; repeats collapsed]
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:16.794 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:16.794 Initialization complete. Launching workers.
00:29:16.794 ========================================================
00:29:16.794 Latency(us)
00:29:16.794 Device Information                                                            :     IOPS   MiB/s   Average      min        max
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  1413.23   60.72  90606.82   131.50  1272137.94
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  1441.93   61.96  89004.68   125.48  1250560.45
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  1430.04   61.45  90011.54   125.43  1285507.85
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  1397.94   60.07  92361.78   124.43  1349063.06
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  1404.74   60.36  92146.54   122.25  1375009.69
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  1408.64   60.53  92118.23   120.48  1376793.37
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  1413.40   60.73  92066.53   120.81  1398679.86
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  1436.83   61.74  88246.99   124.34  1343104.52
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  1443.28   62.02  87215.66   128.78  1227566.17
00:29:16.794 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1383.17   59.43  94853.83   125.57  1490972.30
00:29:16.794 ========================================================
00:29:16.794 Total                                                                          : 14173.19  609.00  90835.15   120.48  1490972.30
00:29:16.794
00:29:16.794 [2024-11-27 05:46:13.276848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.276883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.279062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.279083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.281245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.281264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.283202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.283220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.284981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.285002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.286769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.286788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.288739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.288762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.290556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.290580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:16.794 [2024-11-27 05:46:13.292370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:16.794 [2024-11-27 05:46:13.292394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:16.794 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:19.324 05:46:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3472153
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3472153
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3472153
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3471823 ']'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3471823
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3471823 ']'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3471823
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3471823) - No such process
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3471823 is not found'
Process with pid 3471823 is not found
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:29:20.262
00:29:20.262 real 0m12.303s
00:29:20.262 user 0m45.943s
00:29:20.262 sys 0m1.593s
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:20.262 ************************************
00:29:20.262 END TEST nvmf_shutdown_tc4
00:29:20.262 ************************************
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:20.262
00:29:20.262 real 0m53.774s
00:29:20.262 user 2m53.657s
00:29:20.262 sys 0m13.868s
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:20.262 ************************************
00:29:20.262 END TEST nvmf_shutdown
00:29:20.262 ************************************
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:20.262 ************************************
00:29:20.262 START TEST nvmf_nsid
00:29:20.262 ************************************
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:29:20.262 * Looking for test storage...
00:29:20.262 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:20.262 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:20.522 --rc genhtml_branch_coverage=1
00:29:20.522 --rc genhtml_function_coverage=1
00:29:20.522 --rc genhtml_legend=1
00:29:20.522 --rc geninfo_all_blocks=1
00:29:20.522 --rc geninfo_unexecuted_blocks=1
00:29:20.522
00:29:20.522 '
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:20.522 --rc genhtml_branch_coverage=1
00:29:20.522 --rc genhtml_function_coverage=1
00:29:20.522 --rc genhtml_legend=1
00:29:20.522 --rc geninfo_all_blocks=1
00:29:20.522 --rc geninfo_unexecuted_blocks=1
00:29:20.522
00:29:20.522 '
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:20.522 --rc genhtml_branch_coverage=1
00:29:20.522 --rc genhtml_function_coverage=1
00:29:20.522 --rc genhtml_legend=1
00:29:20.522 --rc geninfo_all_blocks=1
00:29:20.522 --rc geninfo_unexecuted_blocks=1
00:29:20.522
00:29:20.522 '
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:20.522 --rc genhtml_branch_coverage=1
00:29:20.522 --rc genhtml_function_coverage=1
00:29:20.522 --rc genhtml_legend=1
00:29:20.522 --rc geninfo_all_blocks=1
00:29:20.522 --rc geninfo_unexecuted_blocks=1
00:29:20.522
00:29:20.522 '
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:20.522 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.523 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.523 05:46:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.641 05:46:25 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:28.641 05:46:25 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:28.641 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:28.641 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ 
mlx5_core == unbound ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:28.641 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:28.641 05:46:25 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:28.641 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:28.641 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:28.900 05:46:25 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:28.900 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:28.900 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:28.900 altname enp217s0f0np0 00:29:28.900 altname ens818f0np0 00:29:28.900 inet 192.168.100.8/24 scope global mlx_0_0 00:29:28.900 valid_lft forever preferred_lft forever 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # 
get_ip_address mlx_0_1 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:28.900 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:28.900 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:28.900 altname enp217s0f1np1 00:29:28.900 altname ens818f1np1 00:29:28.900 inet 192.168.100.9/24 scope global mlx_0_1 00:29:28.900 valid_lft forever preferred_lft forever 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:28.900 05:46:25 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:28.900 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:28.901 05:46:25 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:28.901 192.168.100.9' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:28.901 192.168.100.9' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:28.901 192.168.100.9' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3477922 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3477922 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3477922 ']' 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:28.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.901 05:46:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:29.159 [2024-11-27 05:46:25.542824] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:29:29.159 [2024-11-27 05:46:25.542918] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.159 [2024-11-27 05:46:25.695623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.418 [2024-11-27 05:46:25.799168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.418 [2024-11-27 05:46:25.799211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.418 [2024-11-27 05:46:25.799224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.418 [2024-11-27 05:46:25.799237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.418 [2024-11-27 05:46:25.799247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:29.418 [2024-11-27 05:46:25.800742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3477956 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=f0cc18a2-3c98-4d31-bbb7-a897515d3816 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=80a0aa42-6b06-4b80-9fca-e0dd268f60bf 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4bdee073-25ae-4857-a916-12bb93028cfb 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:29.985 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.986 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:29.986 null0 00:29:29.986 null1 00:29:29.986 null2 00:29:29.986 [2024-11-27 05:46:26.470085] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000295c0/0x7f6a2460e940) succeed. 00:29:29.986 [2024-11-27 05:46:26.475031] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:29:29.986 [2024-11-27 05:46:26.475116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477956 ] 00:29:29.986 [2024-11-27 05:46:26.478986] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029740/0x7f6a23dbd940) succeed. 00:29:30.245 [2024-11-27 05:46:26.582527] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3477956 /var/tmp/tgt2.sock 00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3477956 ']' 00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:30.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.245 05:46:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:30.245 [2024-11-27 05:46:26.633028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.245 [2024-11-27 05:46:26.739894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.180 05:46:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.180 05:46:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:31.180 05:46:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:31.439 [2024-11-27 05:46:27.840254] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f24c8dbd940) succeed. 00:29:31.439 [2024-11-27 05:46:27.851678] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f24c8d79940) succeed. 
00:29:31.439 [2024-11-27 05:46:27.932035] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:31.439 nvme0n1 nvme0n2 00:29:31.439 nvme1n1 00:29:31.439 05:46:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:31.439 05:46:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:31.440 05:46:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:39.551 05:46:34 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid f0cc18a2-3c98-4d31-bbb7-a897515d3816 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f0cc18a23c984d31bbb7a897515d3816 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F0CC18A23C984D31BBB7A897515D3816 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ F0CC18A23C984D31BBB7A897515D3816 == \F\0\C\C\1\8\A\2\3\C\9\8\4\D\3\1\B\B\B\7\A\8\9\7\5\1\5\D\3\8\1\6 ]] 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 
00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 80a0aa42-6b06-4b80-9fca-e0dd268f60bf 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:39.551 05:46:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=80a0aa426b064b809fcae0dd268f60bf 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 80A0AA426B064B809FCAE0DD268F60BF 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 80A0AA426B064B809FCAE0DD268F60BF == \8\0\A\0\A\A\4\2\6\B\0\6\4\B\8\0\9\F\C\A\E\0\D\D\2\6\8\F\6\0\B\F ]] 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 
-- # return 0 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4bdee073-25ae-4857-a916-12bb93028cfb 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4bdee07325ae4857a91612bb93028cfb 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4BDEE07325AE4857A91612BB93028CFB 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4BDEE07325AE4857A91612BB93028CFB == \4\B\D\E\E\0\7\3\2\5\A\E\4\8\5\7\A\9\1\6\1\2\B\B\9\3\0\2\8\C\F\B ]] 00:29:39.551 05:46:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3477956 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3477956 ']' 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3477956 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477956 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477956' 00:29:46.125 killing process with pid 3477956 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3477956 00:29:46.125 05:46:42 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3477956 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:48.657 rmmod nvme_rdma 00:29:48.657 rmmod nvme_fabrics 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:48.657 
05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3477922 ']' 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3477922 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3477922 ']' 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3477922 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477922 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477922' 00:29:48.657 killing process with pid 3477922 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3477922 00:29:48.657 05:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3477922 00:29:49.592 05:46:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.592 05:46:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:49.592 00:29:49.592 real 0m29.255s 00:29:49.592 user 0m40.542s 00:29:49.592 sys 0m8.287s 00:29:49.592 05:46:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.592 05:46:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:49.592 
************************************ 00:29:49.592 END TEST nvmf_nsid 00:29:49.592 ************************************ 00:29:49.592 05:46:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:49.592 00:29:49.592 real 17m29.722s 00:29:49.592 user 51m34.109s 00:29:49.592 sys 3m48.978s 00:29:49.592 05:46:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.592 05:46:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:49.592 ************************************ 00:29:49.593 END TEST nvmf_target_extra 00:29:49.593 ************************************ 00:29:49.593 05:46:46 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:29:49.593 05:46:46 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:49.593 05:46:46 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.593 05:46:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:49.593 ************************************ 00:29:49.593 START TEST nvmf_host 00:29:49.593 ************************************ 00:29:49.593 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:29:49.593 * Looking for test storage... 
00:29:49.593 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:29:49.593 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:49.593 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:49.593 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.852 --rc genhtml_branch_coverage=1 00:29:49.852 --rc genhtml_function_coverage=1 00:29:49.852 --rc genhtml_legend=1 00:29:49.852 --rc geninfo_all_blocks=1 00:29:49.852 --rc geninfo_unexecuted_blocks=1 00:29:49.852 00:29:49.852 ' 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.852 --rc genhtml_branch_coverage=1 00:29:49.852 --rc genhtml_function_coverage=1 00:29:49.852 --rc genhtml_legend=1 00:29:49.852 --rc 
geninfo_all_blocks=1 00:29:49.852 --rc geninfo_unexecuted_blocks=1 00:29:49.852 00:29:49.852 ' 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.852 --rc genhtml_branch_coverage=1 00:29:49.852 --rc genhtml_function_coverage=1 00:29:49.852 --rc genhtml_legend=1 00:29:49.852 --rc geninfo_all_blocks=1 00:29:49.852 --rc geninfo_unexecuted_blocks=1 00:29:49.852 00:29:49.852 ' 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.852 --rc genhtml_branch_coverage=1 00:29:49.852 --rc genhtml_function_coverage=1 00:29:49.852 --rc genhtml_legend=1 00:29:49.852 --rc geninfo_all_blocks=1 00:29:49.852 --rc geninfo_unexecuted_blocks=1 00:29:49.852 00:29:49.852 ' 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.852 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.853 05:46:46 
nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:49.853 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.853 ************************************ 00:29:49.853 START TEST nvmf_multicontroller 00:29:49.853 ************************************ 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:49.853 * Looking for test storage... 
00:29:49.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:49.853 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:50.113 05:46:46 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.113 --rc 
genhtml_branch_coverage=1 00:29:50.113 --rc genhtml_function_coverage=1 00:29:50.113 --rc genhtml_legend=1 00:29:50.113 --rc geninfo_all_blocks=1 00:29:50.113 --rc geninfo_unexecuted_blocks=1 00:29:50.113 00:29:50.113 ' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.113 --rc genhtml_branch_coverage=1 00:29:50.113 --rc genhtml_function_coverage=1 00:29:50.113 --rc genhtml_legend=1 00:29:50.113 --rc geninfo_all_blocks=1 00:29:50.113 --rc geninfo_unexecuted_blocks=1 00:29:50.113 00:29:50.113 ' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.113 --rc genhtml_branch_coverage=1 00:29:50.113 --rc genhtml_function_coverage=1 00:29:50.113 --rc genhtml_legend=1 00:29:50.113 --rc geninfo_all_blocks=1 00:29:50.113 --rc geninfo_unexecuted_blocks=1 00:29:50.113 00:29:50.113 ' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.113 --rc genhtml_branch_coverage=1 00:29:50.113 --rc genhtml_function_coverage=1 00:29:50.113 --rc genhtml_legend=1 00:29:50.113 --rc geninfo_all_blocks=1 00:29:50.113 --rc geninfo_unexecuted_blocks=1 00:29:50.113 00:29:50.113 ' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.113 05:46:46 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.113 05:46:46 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.113 05:46:46 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.113 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.113 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:29:50.114 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:29:50.114 00:29:50.114 real 0m0.223s 00:29:50.114 user 0m0.126s 00:29:50.114 sys 0m0.115s 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:50.114 ************************************ 00:29:50.114 END TEST nvmf_multicontroller 00:29:50.114 ************************************ 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.114 ************************************ 00:29:50.114 START TEST nvmf_aer 00:29:50.114 ************************************ 00:29:50.114 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:50.373 * Looking for test storage... 
00:29:50.373 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.373 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.373 --rc genhtml_branch_coverage=1 00:29:50.373 --rc genhtml_function_coverage=1 00:29:50.373 --rc genhtml_legend=1 00:29:50.373 --rc geninfo_all_blocks=1 00:29:50.373 --rc geninfo_unexecuted_blocks=1 00:29:50.373 00:29:50.373 ' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:50.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:50.374 --rc genhtml_branch_coverage=1 00:29:50.374 --rc genhtml_function_coverage=1 00:29:50.374 --rc genhtml_legend=1 00:29:50.374 --rc geninfo_all_blocks=1 00:29:50.374 --rc geninfo_unexecuted_blocks=1 00:29:50.374 00:29:50.374 ' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.374 --rc genhtml_branch_coverage=1 00:29:50.374 --rc genhtml_function_coverage=1 00:29:50.374 --rc genhtml_legend=1 00:29:50.374 --rc geninfo_all_blocks=1 00:29:50.374 --rc geninfo_unexecuted_blocks=1 00:29:50.374 00:29:50.374 ' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.374 --rc genhtml_branch_coverage=1 00:29:50.374 --rc genhtml_function_coverage=1 00:29:50.374 --rc genhtml_legend=1 00:29:50.374 --rc geninfo_all_blocks=1 00:29:50.374 --rc geninfo_unexecuted_blocks=1 00:29:50.374 00:29:50.374 ' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export 
PATH 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.374 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:50.374 05:46:46 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.374 05:46:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:00.346 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:00.346 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 
-- # [[ mlx5_core == unknown ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:00.346 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 
== 0 )) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:00.346 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@530 -- # allocate_nic_ips 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:00.346 05:46:55 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:00.346 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:00.347 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:00.347 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:00.347 altname enp217s0f0np0 00:30:00.347 altname ens818f0np0 00:30:00.347 inet 192.168.100.8/24 scope global mlx_0_0 00:30:00.347 valid_lft forever preferred_lft forever 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:00.347 05:46:55 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:00.347 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:00.347 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:00.347 altname enp217s0f1np1 00:30:00.347 altname ens818f1np1 00:30:00.347 inet 192.168.100.9/24 scope global mlx_0_1 00:30:00.347 valid_lft forever preferred_lft forever 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:00.347 05:46:55 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk 
'{print $4}' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:00.347 192.168.100.9' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:00.347 192.168.100.9' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:00.347 192.168.100.9' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # 
nvmfpid=3485487 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3485487 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3485487 ']' 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.347 05:46:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.347 [2024-11-27 05:46:55.548376] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:00.347 [2024-11-27 05:46:55.548492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.347 [2024-11-27 05:46:55.703023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.347 [2024-11-27 05:46:55.803095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.347 [2024-11-27 05:46:55.803148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:00.347 [2024-11-27 05:46:55.803161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.347 [2024-11-27 05:46:55.803174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.347 [2024-11-27 05:46:55.803185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.347 [2024-11-27 05:46:55.805837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.347 [2024-11-27 05:46:55.805861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.347 [2024-11-27 05:46:55.805922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.347 [2024-11-27 05:46:55.805929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.347 [2024-11-27 05:46:56.455418] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f8302f76940) succeed. 
00:30:00.347 [2024-11-27 05:46:56.464984] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f8302f32940) succeed. 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.347 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.347 Malloc0 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.348 [2024-11-27 
05:46:56.817600] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.348 [ 00:30:00.348 { 00:30:00.348 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:00.348 "subtype": "Discovery", 00:30:00.348 "listen_addresses": [], 00:30:00.348 "allow_any_host": true, 00:30:00.348 "hosts": [] 00:30:00.348 }, 00:30:00.348 { 00:30:00.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.348 "subtype": "NVMe", 00:30:00.348 "listen_addresses": [ 00:30:00.348 { 00:30:00.348 "trtype": "RDMA", 00:30:00.348 "adrfam": "IPv4", 00:30:00.348 "traddr": "192.168.100.8", 00:30:00.348 "trsvcid": "4420" 00:30:00.348 } 00:30:00.348 ], 00:30:00.348 "allow_any_host": true, 00:30:00.348 "hosts": [], 00:30:00.348 "serial_number": "SPDK00000000000001", 00:30:00.348 "model_number": "SPDK bdev Controller", 00:30:00.348 "max_namespaces": 2, 00:30:00.348 "min_cntlid": 1, 00:30:00.348 "max_cntlid": 65519, 00:30:00.348 "namespaces": [ 00:30:00.348 { 00:30:00.348 "nsid": 1, 00:30:00.348 "bdev_name": "Malloc0", 00:30:00.348 "name": "Malloc0", 00:30:00.348 "nguid": "FDB3B6CF2F864B5DAAECCB7904D5A5F8", 00:30:00.348 "uuid": "fdb3b6cf-2f86-4b5d-aaec-cb7904d5a5f8" 00:30:00.348 } 00:30:00.348 ] 00:30:00.348 } 00:30:00.348 ] 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:00.348 05:46:56 
nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3485775 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:00.348 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:00.607 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.607 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:00.607 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:00.607 05:46:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:30:00.607 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.865 Malloc1 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:00.865 [ 00:30:00.865 { 00:30:00.865 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:00.865 "subtype": "Discovery", 
00:30:00.865 "listen_addresses": [], 00:30:00.865 "allow_any_host": true, 00:30:00.865 "hosts": [] 00:30:00.865 }, 00:30:00.865 { 00:30:00.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.865 "subtype": "NVMe", 00:30:00.865 "listen_addresses": [ 00:30:00.865 { 00:30:00.865 "trtype": "RDMA", 00:30:00.865 "adrfam": "IPv4", 00:30:00.865 "traddr": "192.168.100.8", 00:30:00.865 "trsvcid": "4420" 00:30:00.865 } 00:30:00.865 ], 00:30:00.865 "allow_any_host": true, 00:30:00.865 "hosts": [], 00:30:00.865 "serial_number": "SPDK00000000000001", 00:30:00.865 "model_number": "SPDK bdev Controller", 00:30:00.865 "max_namespaces": 2, 00:30:00.865 "min_cntlid": 1, 00:30:00.865 "max_cntlid": 65519, 00:30:00.865 "namespaces": [ 00:30:00.865 { 00:30:00.865 "nsid": 1, 00:30:00.865 "bdev_name": "Malloc0", 00:30:00.865 "name": "Malloc0", 00:30:00.865 "nguid": "FDB3B6CF2F864B5DAAECCB7904D5A5F8", 00:30:00.865 "uuid": "fdb3b6cf-2f86-4b5d-aaec-cb7904d5a5f8" 00:30:00.865 }, 00:30:00.865 { 00:30:00.865 "nsid": 2, 00:30:00.865 "bdev_name": "Malloc1", 00:30:00.865 "name": "Malloc1", 00:30:00.865 "nguid": "FF4777F312B34F2CA7E289978E307CB0", 00:30:00.865 "uuid": "ff4777f3-12b3-4f2c-a7e2-89978e307cb0" 00:30:00.865 } 00:30:00.865 ] 00:30:00.865 } 00:30:00.865 ] 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.865 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3485775 00:30:01.122 Asynchronous Event Request test 00:30:01.122 Attaching to 192.168.100.8 00:30:01.122 Attached to 192.168.100.8 00:30:01.122 Registering asynchronous event callbacks... 00:30:01.122 Starting namespace attribute notice tests for all controllers... 00:30:01.122 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:01.122 aer_cb - Changed Namespace 00:30:01.122 Cleaning up... 
00:30:01.122 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:01.122 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.122 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.378 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:01.379 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:01.379 rmmod nvme_rdma 00:30:01.636 rmmod nvme_fabrics 00:30:01.636 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.636 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:01.636 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:01.636 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3485487 ']' 00:30:01.636 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3485487 00:30:01.636 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3485487 ']' 00:30:01.636 05:46:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3485487 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3485487 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3485487' 00:30:01.636 killing process with pid 3485487 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3485487 00:30:01.636 05:46:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3485487 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:03.532 
00:30:03.532 real 0m13.099s 00:30:03.532 user 0m16.264s 00:30:03.532 sys 0m7.501s 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.532 ************************************ 00:30:03.532 END TEST nvmf_aer 00:30:03.532 ************************************ 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.532 ************************************ 00:30:03.532 START TEST nvmf_async_init 00:30:03.532 ************************************ 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:03.532 * Looking for test storage... 
00:30:03.532 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.532 05:46:59 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.532 05:46:59 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.532 --rc genhtml_branch_coverage=1 00:30:03.532 --rc genhtml_function_coverage=1 00:30:03.532 --rc genhtml_legend=1 00:30:03.532 --rc geninfo_all_blocks=1 00:30:03.532 --rc geninfo_unexecuted_blocks=1 
00:30:03.532 00:30:03.532 ' 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.532 --rc genhtml_branch_coverage=1 00:30:03.532 --rc genhtml_function_coverage=1 00:30:03.532 --rc genhtml_legend=1 00:30:03.532 --rc geninfo_all_blocks=1 00:30:03.532 --rc geninfo_unexecuted_blocks=1 00:30:03.532 00:30:03.532 ' 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.532 --rc genhtml_branch_coverage=1 00:30:03.532 --rc genhtml_function_coverage=1 00:30:03.532 --rc genhtml_legend=1 00:30:03.532 --rc geninfo_all_blocks=1 00:30:03.532 --rc geninfo_unexecuted_blocks=1 00:30:03.532 00:30:03.532 ' 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.532 --rc genhtml_branch_coverage=1 00:30:03.532 --rc genhtml_function_coverage=1 00:30:03.532 --rc genhtml_legend=1 00:30:03.532 --rc geninfo_all_blocks=1 00:30:03.532 --rc geninfo_unexecuted_blocks=1 00:30:03.532 00:30:03.532 ' 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.532 05:47:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.532 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:03.533 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c541deb49a31449896d70d27c653fe7d 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.533 05:47:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:13.511 05:47:08 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:13.511 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:13.512 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:13.512 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:13.512 05:47:08 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:13.512 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:13.512 
05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:13.512 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:13.512 05:47:08 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:13.512 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:13.512 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:13.512 altname enp217s0f0np0 00:30:13.512 altname ens818f0np0 00:30:13.512 inet 192.168.100.8/24 scope global mlx_0_0 00:30:13.512 valid_lft forever preferred_lft forever 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:13.512 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:13.512 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:13.512 altname enp217s0f1np1 00:30:13.512 altname ens818f1np1 00:30:13.512 inet 192.168.100.9/24 scope global mlx_0_1 00:30:13.512 valid_lft forever preferred_lft forever 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.512 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:13.513 05:47:08 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:13.513 192.168.100.9' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:13.513 192.168.100.9' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:13.513 192.168.100.9' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3490226 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3490226 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3490226 ']' 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.513 05:47:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 [2024-11-27 05:47:08.726038] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:13.513 [2024-11-27 05:47:08.726135] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.513 [2024-11-27 05:47:08.881543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.513 [2024-11-27 05:47:08.974602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.513 [2024-11-27 05:47:08.974655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.513 [2024-11-27 05:47:08.974667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.513 [2024-11-27 05:47:08.974680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.513 [2024-11-27 05:47:08.974689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:13.513 [2024-11-27 05:47:08.975946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 [2024-11-27 05:47:09.585596] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fdef3d08940) succeed. 00:30:13.513 [2024-11-27 05:47:09.595093] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fdef33bd940) succeed. 
00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 null0 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c541deb49a31449896d70d27c653fe7d 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 [2024-11-27 05:47:09.712516] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 nvme0n1 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.513 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.513 [ 00:30:13.513 { 00:30:13.513 "name": "nvme0n1", 00:30:13.513 "aliases": [ 00:30:13.513 "c541deb4-9a31-4498-96d7-0d27c653fe7d" 00:30:13.513 ], 00:30:13.513 "product_name": "NVMe disk", 00:30:13.513 "block_size": 512, 00:30:13.513 "num_blocks": 2097152, 00:30:13.513 "uuid": "c541deb4-9a31-4498-96d7-0d27c653fe7d", 00:30:13.513 "numa_id": 1, 00:30:13.513 "assigned_rate_limits": { 00:30:13.513 "rw_ios_per_sec": 0, 00:30:13.513 "rw_mbytes_per_sec": 0, 
00:30:13.513 "r_mbytes_per_sec": 0, 00:30:13.513 "w_mbytes_per_sec": 0 00:30:13.513 }, 00:30:13.513 "claimed": false, 00:30:13.513 "zoned": false, 00:30:13.513 "supported_io_types": { 00:30:13.513 "read": true, 00:30:13.513 "write": true, 00:30:13.513 "unmap": false, 00:30:13.513 "flush": true, 00:30:13.513 "reset": true, 00:30:13.513 "nvme_admin": true, 00:30:13.513 "nvme_io": true, 00:30:13.513 "nvme_io_md": false, 00:30:13.513 "write_zeroes": true, 00:30:13.513 "zcopy": false, 00:30:13.513 "get_zone_info": false, 00:30:13.513 "zone_management": false, 00:30:13.513 "zone_append": false, 00:30:13.513 "compare": true, 00:30:13.513 "compare_and_write": true, 00:30:13.513 "abort": true, 00:30:13.513 "seek_hole": false, 00:30:13.513 "seek_data": false, 00:30:13.513 "copy": true, 00:30:13.514 "nvme_iov_md": false 00:30:13.514 }, 00:30:13.514 "memory_domains": [ 00:30:13.514 { 00:30:13.514 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:13.514 "dma_device_type": 0 00:30:13.514 } 00:30:13.514 ], 00:30:13.514 "driver_specific": { 00:30:13.514 "nvme": [ 00:30:13.514 { 00:30:13.514 "trid": { 00:30:13.514 "trtype": "RDMA", 00:30:13.514 "adrfam": "IPv4", 00:30:13.514 "traddr": "192.168.100.8", 00:30:13.514 "trsvcid": "4420", 00:30:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:13.514 }, 00:30:13.514 "ctrlr_data": { 00:30:13.514 "cntlid": 1, 00:30:13.514 "vendor_id": "0x8086", 00:30:13.514 "model_number": "SPDK bdev Controller", 00:30:13.514 "serial_number": "00000000000000000000", 00:30:13.514 "firmware_revision": "25.01", 00:30:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.514 "oacs": { 00:30:13.514 "security": 0, 00:30:13.514 "format": 0, 00:30:13.514 "firmware": 0, 00:30:13.514 "ns_manage": 0 00:30:13.514 }, 00:30:13.514 "multi_ctrlr": true, 00:30:13.514 "ana_reporting": false 00:30:13.514 }, 00:30:13.514 "vs": { 00:30:13.514 "nvme_version": "1.3" 00:30:13.514 }, 00:30:13.514 "ns_data": { 00:30:13.514 "id": 1, 00:30:13.514 "can_share": true 00:30:13.514 } 
00:30:13.514 } 00:30:13.514 ], 00:30:13.514 "mp_policy": "active_passive" 00:30:13.514 } 00:30:13.514 } 00:30:13.514 ] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 [2024-11-27 05:47:09.826129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:13.514 [2024-11-27 05:47:09.865259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.514 [2024-11-27 05:47:09.891452] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 [ 00:30:13.514 { 00:30:13.514 "name": "nvme0n1", 00:30:13.514 "aliases": [ 00:30:13.514 "c541deb4-9a31-4498-96d7-0d27c653fe7d" 00:30:13.514 ], 00:30:13.514 "product_name": "NVMe disk", 00:30:13.514 "block_size": 512, 00:30:13.514 "num_blocks": 2097152, 00:30:13.514 "uuid": "c541deb4-9a31-4498-96d7-0d27c653fe7d", 00:30:13.514 "numa_id": 1, 00:30:13.514 "assigned_rate_limits": { 00:30:13.514 "rw_ios_per_sec": 0, 00:30:13.514 "rw_mbytes_per_sec": 0, 00:30:13.514 "r_mbytes_per_sec": 0, 00:30:13.514 "w_mbytes_per_sec": 0 00:30:13.514 }, 00:30:13.514 "claimed": false, 00:30:13.514 "zoned": false, 00:30:13.514 "supported_io_types": { 00:30:13.514 "read": true, 00:30:13.514 "write": true, 00:30:13.514 "unmap": false, 00:30:13.514 "flush": true, 00:30:13.514 "reset": true, 00:30:13.514 "nvme_admin": true, 00:30:13.514 "nvme_io": true, 00:30:13.514 "nvme_io_md": false, 00:30:13.514 "write_zeroes": true, 00:30:13.514 "zcopy": false, 00:30:13.514 "get_zone_info": false, 00:30:13.514 "zone_management": false, 00:30:13.514 "zone_append": false, 00:30:13.514 "compare": true, 00:30:13.514 "compare_and_write": true, 00:30:13.514 "abort": true, 00:30:13.514 "seek_hole": false, 00:30:13.514 "seek_data": false, 00:30:13.514 "copy": true, 00:30:13.514 "nvme_iov_md": false 00:30:13.514 }, 00:30:13.514 "memory_domains": [ 00:30:13.514 { 00:30:13.514 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:13.514 "dma_device_type": 0 00:30:13.514 } 00:30:13.514 ], 00:30:13.514 "driver_specific": { 00:30:13.514 "nvme": [ 00:30:13.514 { 00:30:13.514 
"trid": { 00:30:13.514 "trtype": "RDMA", 00:30:13.514 "adrfam": "IPv4", 00:30:13.514 "traddr": "192.168.100.8", 00:30:13.514 "trsvcid": "4420", 00:30:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:13.514 }, 00:30:13.514 "ctrlr_data": { 00:30:13.514 "cntlid": 2, 00:30:13.514 "vendor_id": "0x8086", 00:30:13.514 "model_number": "SPDK bdev Controller", 00:30:13.514 "serial_number": "00000000000000000000", 00:30:13.514 "firmware_revision": "25.01", 00:30:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.514 "oacs": { 00:30:13.514 "security": 0, 00:30:13.514 "format": 0, 00:30:13.514 "firmware": 0, 00:30:13.514 "ns_manage": 0 00:30:13.514 }, 00:30:13.514 "multi_ctrlr": true, 00:30:13.514 "ana_reporting": false 00:30:13.514 }, 00:30:13.514 "vs": { 00:30:13.514 "nvme_version": "1.3" 00:30:13.514 }, 00:30:13.514 "ns_data": { 00:30:13.514 "id": 1, 00:30:13.514 "can_share": true 00:30:13.514 } 00:30:13.514 } 00:30:13.514 ], 00:30:13.514 "mp_policy": "active_passive" 00:30:13.514 } 00:30:13.514 } 00:30:13.514 ] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UFgVa3Rvxn 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:13.514 05:47:09 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UFgVa3Rvxn 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.UFgVa3Rvxn 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 [2024-11-27 05:47:09.990755] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:13.514 05:47:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 [2024-11-27 05:47:10.006775] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:13.514 nvme0n1 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.514 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.514 [ 00:30:13.514 { 00:30:13.514 "name": "nvme0n1", 00:30:13.514 "aliases": [ 00:30:13.514 "c541deb4-9a31-4498-96d7-0d27c653fe7d" 00:30:13.514 ], 00:30:13.514 "product_name": "NVMe disk", 00:30:13.514 "block_size": 512, 00:30:13.514 "num_blocks": 2097152, 00:30:13.514 "uuid": "c541deb4-9a31-4498-96d7-0d27c653fe7d", 00:30:13.514 "numa_id": 1, 00:30:13.514 "assigned_rate_limits": { 00:30:13.514 "rw_ios_per_sec": 0, 00:30:13.514 "rw_mbytes_per_sec": 0, 00:30:13.514 "r_mbytes_per_sec": 0, 00:30:13.514 "w_mbytes_per_sec": 0 00:30:13.514 }, 00:30:13.514 "claimed": false, 00:30:13.514 "zoned": false, 00:30:13.514 "supported_io_types": { 00:30:13.514 "read": true, 00:30:13.514 "write": true, 
00:30:13.514 "unmap": false, 00:30:13.514 "flush": true, 00:30:13.514 "reset": true, 00:30:13.514 "nvme_admin": true, 00:30:13.514 "nvme_io": true, 00:30:13.514 "nvme_io_md": false, 00:30:13.514 "write_zeroes": true, 00:30:13.514 "zcopy": false, 00:30:13.514 "get_zone_info": false, 00:30:13.514 "zone_management": false, 00:30:13.514 "zone_append": false, 00:30:13.514 "compare": true, 00:30:13.774 "compare_and_write": true, 00:30:13.774 "abort": true, 00:30:13.774 "seek_hole": false, 00:30:13.774 "seek_data": false, 00:30:13.774 "copy": true, 00:30:13.774 "nvme_iov_md": false 00:30:13.774 }, 00:30:13.774 "memory_domains": [ 00:30:13.774 { 00:30:13.774 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:13.774 "dma_device_type": 0 00:30:13.774 } 00:30:13.774 ], 00:30:13.774 "driver_specific": { 00:30:13.774 "nvme": [ 00:30:13.774 { 00:30:13.774 "trid": { 00:30:13.774 "trtype": "RDMA", 00:30:13.774 "adrfam": "IPv4", 00:30:13.774 "traddr": "192.168.100.8", 00:30:13.774 "trsvcid": "4421", 00:30:13.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:13.774 }, 00:30:13.774 "ctrlr_data": { 00:30:13.774 "cntlid": 3, 00:30:13.774 "vendor_id": "0x8086", 00:30:13.774 "model_number": "SPDK bdev Controller", 00:30:13.774 "serial_number": "00000000000000000000", 00:30:13.774 "firmware_revision": "25.01", 00:30:13.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.774 "oacs": { 00:30:13.774 "security": 0, 00:30:13.774 "format": 0, 00:30:13.774 "firmware": 0, 00:30:13.774 "ns_manage": 0 00:30:13.774 }, 00:30:13.774 "multi_ctrlr": true, 00:30:13.774 "ana_reporting": false 00:30:13.774 }, 00:30:13.774 "vs": { 00:30:13.774 "nvme_version": "1.3" 00:30:13.774 }, 00:30:13.774 "ns_data": { 00:30:13.774 "id": 1, 00:30:13.774 "can_share": true 00:30:13.774 } 00:30:13.774 } 00:30:13.774 ], 00:30:13.774 "mp_policy": "active_passive" 00:30:13.774 } 00:30:13.774 } 00:30:13.774 ] 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.774 
05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.UFgVa3Rvxn 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:13.774 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:13.775 rmmod nvme_rdma 00:30:13.775 rmmod nvme_fabrics 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3490226 ']' 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@518 -- # killprocess 3490226 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3490226 ']' 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3490226 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3490226 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3490226' 00:30:13.775 killing process with pid 3490226 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3490226 00:30:13.775 05:47:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3490226 00:30:14.712 05:47:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.712 05:47:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:14.712 00:30:14.712 real 0m11.482s 00:30:14.712 user 0m5.146s 00:30:14.712 sys 0m7.151s 00:30:14.713 05:47:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.713 05:47:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.713 ************************************ 00:30:14.713 END TEST nvmf_async_init 00:30:14.713 ************************************ 00:30:14.971 05:47:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:14.971 05:47:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:14.971 05:47:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.972 ************************************ 00:30:14.972 START TEST dma 00:30:14.972 ************************************ 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:14.972 * Looking for test storage... 00:30:14.972 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # 
ver1_l=2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:30:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.972 --rc genhtml_branch_coverage=1 00:30:14.972 --rc genhtml_function_coverage=1 00:30:14.972 --rc genhtml_legend=1 00:30:14.972 --rc geninfo_all_blocks=1 00:30:14.972 --rc geninfo_unexecuted_blocks=1 00:30:14.972 00:30:14.972 ' 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.972 --rc genhtml_branch_coverage=1 00:30:14.972 --rc genhtml_function_coverage=1 00:30:14.972 --rc genhtml_legend=1 00:30:14.972 --rc geninfo_all_blocks=1 00:30:14.972 --rc geninfo_unexecuted_blocks=1 00:30:14.972 00:30:14.972 ' 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.972 --rc genhtml_branch_coverage=1 00:30:14.972 --rc genhtml_function_coverage=1 00:30:14.972 --rc genhtml_legend=1 00:30:14.972 --rc geninfo_all_blocks=1 00:30:14.972 --rc geninfo_unexecuted_blocks=1 00:30:14.972 00:30:14.972 ' 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.972 --rc genhtml_branch_coverage=1 00:30:14.972 --rc genhtml_function_coverage=1 00:30:14.972 --rc genhtml_legend=1 00:30:14.972 --rc geninfo_all_blocks=1 00:30:14.972 --rc geninfo_unexecuted_blocks=1 00:30:14.972 00:30:14.972 ' 00:30:14.972 05:47:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.231 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:15.232 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.232 05:47:11 nvmf_rdma.nvmf_host.dma -- 
common/autotest_common.sh@10 -- # set +x 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.487 05:47:19 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:23.487 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:23.487 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:23.488 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:30:23.488 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:23.488 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:23.488 05:47:19 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:23.488 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:23.488 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:23.488 altname enp217s0f0np0 00:30:23.488 altname ens818f0np0 00:30:23.488 inet 192.168.100.8/24 scope global mlx_0_0 00:30:23.488 valid_lft forever preferred_lft forever 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show 
mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:23.488 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:23.488 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:23.488 altname enp217s0f1np1 00:30:23.488 altname ens818f1np1 00:30:23.488 inet 192.168.100.9/24 scope global mlx_0_1 00:30:23.488 valid_lft forever preferred_lft forever 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:23.488 05:47:19 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:23.488 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:23.489 192.168.100.9' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:23.489 192.168.100.9' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:23.489 192.168.100.9' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@509 -- # nvmfpid=3494682 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3494682 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 3494682 ']' 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.489 05:47:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.489 [2024-11-27 05:47:19.385407] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:23.489 [2024-11-27 05:47:19.385502] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.489 [2024-11-27 05:47:19.540209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:23.489 [2024-11-27 05:47:19.636566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.489 [2024-11-27 05:47:19.636616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:23.489 [2024-11-27 05:47:19.636629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.489 [2024-11-27 05:47:19.636643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.489 [2024-11-27 05:47:19.636653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.489 [2024-11-27 05:47:19.638670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.489 [2024-11-27 05:47:19.638676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.748 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.748 [2024-11-27 05:47:20.257303] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7feaca37d940) succeed. 00:30:23.748 [2024-11-27 05:47:20.266644] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7feaca339940) succeed. 
00:30:24.006 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.006 05:47:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:30:24.006 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.006 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:24.264 Malloc0 00:30:24.264 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.264 05:47:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:30:24.264 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.264 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:24.264 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.264 05:47:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:30:24.264 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:24.265 [2024-11-27 05:47:20.694561] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.265 05:47:20 
nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:24.265 { 00:30:24.265 "params": { 00:30:24.265 "name": "Nvme$subsystem", 00:30:24.265 "trtype": "$TEST_TRANSPORT", 00:30:24.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.265 "adrfam": "ipv4", 00:30:24.265 "trsvcid": "$NVMF_PORT", 00:30:24.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.265 "hdgst": ${hdgst:-false}, 00:30:24.265 "ddgst": ${ddgst:-false} 00:30:24.265 }, 00:30:24.265 "method": "bdev_nvme_attach_controller" 00:30:24.265 } 00:30:24.265 EOF 00:30:24.265 )") 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 
00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:30:24.265 05:47:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:24.265 "params": { 00:30:24.265 "name": "Nvme0", 00:30:24.265 "trtype": "rdma", 00:30:24.265 "traddr": "192.168.100.8", 00:30:24.265 "adrfam": "ipv4", 00:30:24.265 "trsvcid": "4420", 00:30:24.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:24.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:24.265 "hdgst": false, 00:30:24.265 "ddgst": false 00:30:24.265 }, 00:30:24.265 "method": "bdev_nvme_attach_controller" 00:30:24.265 }' 00:30:24.265 [2024-11-27 05:47:20.779308] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:24.265 [2024-11-27 05:47:20.779399] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494940 ] 00:30:24.523 [2024-11-27 05:47:20.932465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:24.523 [2024-11-27 05:47:21.034817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.523 [2024-11-27 05:47:21.034825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.083 bdev Nvme0n1 reports 1 memory domains 00:30:31.083 bdev Nvme0n1 supports RDMA memory domain 00:30:31.083 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:31.083 ========================================================================== 00:30:31.083 Latency [us] 00:30:31.083 IOPS MiB/s Average min max 00:30:31.083 Core 2: 19318.96 75.46 827.43 288.47 12647.53 00:30:31.083 Core 3: 19133.18 74.74 835.49 281.69 13062.37 00:30:31.083 ========================================================================== 00:30:31.083 Total : 38452.14 150.20 831.44 281.69 13062.37 00:30:31.083 00:30:31.083 Total operations: 192289, 
translate 192289 pull_push 0 memzero 0 00:30:31.083 05:47:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:30:31.083 05:47:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:30:31.083 05:47:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:30:31.083 [2024-11-27 05:47:27.444317] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:31.084 [2024-11-27 05:47:27.444411] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496046 ] 00:30:31.084 [2024-11-27 05:47:27.593207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:31.343 [2024-11-27 05:47:27.699032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.343 [2024-11-27 05:47:27.699040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:37.908 bdev Malloc0 reports 2 memory domains 00:30:37.908 bdev Malloc0 doesn't support RDMA memory domain 00:30:37.908 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:37.908 ========================================================================== 00:30:37.908 Latency [us] 00:30:37.908 IOPS MiB/s Average min max 00:30:37.908 Core 2: 12162.53 47.51 1314.63 497.56 2553.83 00:30:37.908 Core 3: 12488.82 48.78 1280.24 456.85 1681.39 00:30:37.908 ========================================================================== 00:30:37.908 Total : 24651.35 96.29 1297.21 456.85 2553.83 00:30:37.908 00:30:37.908 Total operations: 123300, translate 0 pull_push 493200 memzero 0 00:30:37.908 05:47:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:30:37.908 05:47:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:30:37.908 05:47:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:37.908 05:47:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:37.909 Ignoring -M option 00:30:37.909 [2024-11-27 05:47:34.479696] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:37.909 [2024-11-27 05:47:34.479790] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497121 ] 00:30:38.166 [2024-11-27 05:47:34.632438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:38.166 [2024-11-27 05:47:34.736225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.166 [2024-11-27 05:47:34.736234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.719 bdev 8c260c89-31f1-4300-a154-f5560712bd42 reports 1 memory domains 00:30:44.719 bdev 8c260c89-31f1-4300-a154-f5560712bd42 supports RDMA memory domain 00:30:44.719 Initialization complete, running randread IO for 5 sec on 2 cores 00:30:44.719 ========================================================================== 00:30:44.719 Latency [us] 00:30:44.719 IOPS MiB/s Average min max 00:30:44.719 Core 2: 61902.48 241.81 257.56 85.87 2294.75 00:30:44.719 Core 3: 63072.90 246.38 252.77 77.88 2163.51 00:30:44.719 ========================================================================== 00:30:44.719 Total : 124975.38 488.19 255.14 77.88 2294.75 00:30:44.719 00:30:44.719 Total operations: 624973, translate 0 pull_push 0 memzero 624973 00:30:44.719 05:47:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:30:44.977 [2024-11-27 05:47:41.322796] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:47.500 Initializing NVMe Controllers 00:30:47.500 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:30:47.500 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:47.500 Initialization complete. Launching workers. 00:30:47.500 ======================================================== 00:30:47.500 Latency(us) 00:30:47.500 Device Information : IOPS MiB/s Average min max 00:30:47.500 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.66 7.91 7957.13 6007.80 8984.34 00:30:47.500 ======================================================== 00:30:47.500 Total : 2024.66 7.91 7957.13 6007.80 8984.34 00:30:47.500 00:30:47.500 05:47:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:30:47.500 05:47:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:30:47.500 05:47:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:47.500 05:47:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:47.500 [2024-11-27 05:47:43.796426] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:30:47.500 [2024-11-27 05:47:43.796516] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498714 ] 00:30:47.500 [2024-11-27 05:47:43.944932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:47.500 [2024-11-27 05:47:44.049860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.500 [2024-11-27 05:47:44.049868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.066 bdev 0f1d8afc-5575-4074-a1a9-1bbd01ea4bd0 reports 1 memory domains 00:30:54.066 bdev 0f1d8afc-5575-4074-a1a9-1bbd01ea4bd0 supports RDMA memory domain 00:30:54.066 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:54.066 ========================================================================== 00:30:54.066 Latency [us] 00:30:54.066 IOPS MiB/s Average min max 00:30:54.066 Core 2: 16632.15 64.97 961.16 17.89 6871.40 00:30:54.066 Core 3: 16873.47 65.91 947.44 9.43 6941.70 00:30:54.066 ========================================================================== 00:30:54.066 Total : 33505.62 130.88 954.25 9.43 6941.70 00:30:54.066 00:30:54.066 Total operations: 167585, translate 167449 pull_push 0 memzero 136 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:30:54.066 05:47:50 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:54.066 rmmod nvme_rdma 00:30:54.066 rmmod nvme_fabrics 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3494682 ']' 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3494682 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 3494682 ']' 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 3494682 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3494682 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3494682' 00:30:54.066 killing process with pid 3494682 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 3494682 00:30:54.066 05:47:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 3494682 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:56.601 00:30:56.601 real 0m41.239s 
00:30:56.601 user 1m57.853s 00:30:56.601 sys 0m7.938s 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 ************************************ 00:30:56.601 END TEST dma 00:30:56.601 ************************************ 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 ************************************ 00:30:56.601 START TEST nvmf_identify 00:30:56.601 ************************************ 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:30:56.601 * Looking for test storage... 
00:30:56.601 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:56.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.601 --rc genhtml_branch_coverage=1 00:30:56.601 --rc genhtml_function_coverage=1 00:30:56.601 --rc genhtml_legend=1 00:30:56.601 --rc geninfo_all_blocks=1 00:30:56.601 --rc geninfo_unexecuted_blocks=1 00:30:56.601 00:30:56.601 ' 00:30:56.601 05:47:52 
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:56.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.601 --rc genhtml_branch_coverage=1 00:30:56.601 --rc genhtml_function_coverage=1 00:30:56.601 --rc genhtml_legend=1 00:30:56.601 --rc geninfo_all_blocks=1 00:30:56.601 --rc geninfo_unexecuted_blocks=1 00:30:56.601 00:30:56.601 ' 00:30:56.601 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.602 --rc genhtml_branch_coverage=1 00:30:56.602 --rc genhtml_function_coverage=1 00:30:56.602 --rc genhtml_legend=1 00:30:56.602 --rc geninfo_all_blocks=1 00:30:56.602 --rc geninfo_unexecuted_blocks=1 00:30:56.602 00:30:56.602 ' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.602 --rc genhtml_branch_coverage=1 00:30:56.602 --rc genhtml_function_coverage=1 00:30:56.602 --rc genhtml_legend=1 00:30:56.602 --rc geninfo_all_blocks=1 00:30:56.602 --rc geninfo_unexecuted_blocks=1 00:30:56.602 00:30:56.602 ' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.602 05:47:52 
nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:56.602 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.602 05:47:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.716 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.716 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.716 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.716 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.716 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.716 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.716 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:04.717 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:04.717 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.717 05:48:01 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:04.717 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:04.717 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:31:04.717 05:48:01 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:04.717 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:04.976 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:04.976 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:04.976 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.976 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ 
mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:04.976 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:04.976 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:04.977 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:31:04.977 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:04.977 altname enp217s0f0np0 00:31:04.977 altname ens818f0np0 00:31:04.977 inet 192.168.100.8/24 scope global mlx_0_0 00:31:04.977 valid_lft forever preferred_lft forever 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:04.977 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:04.977 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:04.977 altname enp217s0f1np1 00:31:04.977 altname ens818f1np1 00:31:04.977 inet 192.168.100.9/24 scope global mlx_0_1 00:31:04.977 valid_lft forever preferred_lft forever 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:04.977 05:48:01 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:04.977 192.168.100.9' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:04.977 192.168.100.9' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:04.977 192.168.100.9' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify 
-- nvmf/common.sh@486 -- # tail -n +2 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3504313 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3504313 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3504313 ']' 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.977 05:48:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.977 [2024-11-27 05:48:01.540174] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:31:04.977 [2024-11-27 05:48:01.540288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.236 [2024-11-27 05:48:01.695462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:05.236 [2024-11-27 05:48:01.794756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.236 [2024-11-27 05:48:01.794806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.236 [2024-11-27 05:48:01.794818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.236 [2024-11-27 05:48:01.794848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.236 [2024-11-27 05:48:01.794858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
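The `get_ip_address` helper traced above resolves an interface's IPv4 address by parsing `ip -o -4 addr show` output. A minimal re-creation of that pipeline, fed a canned sample line (copied from the log) so it runs without the mlx interfaces present:

```shell
# Hedged re-creation of nvmf/common.sh's get_ip_address pipeline:
# field 4 of "ip -o -4 addr show <if>" oneline output is "ADDR/PREFIX";
# cut strips the prefix length, leaving the bare address.
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"    # prints 192.168.100.8
```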
00:31:05.236 [2024-11-27 05:48:01.797276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.236 [2024-11-27 05:48:01.797291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.236 [2024-11-27 05:48:01.797386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.236 [2024-11-27 05:48:01.797394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.802 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.802 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:31:05.802 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:05.802 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.802 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.061 [2024-11-27 05:48:02.401143] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fd3fefb3940) succeed. 00:31:06.061 [2024-11-27 05:48:02.411097] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fd3fef6f940) succeed. 
00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.321 Malloc0 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.321 [2024-11-27 05:48:02.832472] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.321 [ 00:31:06.321 { 00:31:06.321 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:06.321 "subtype": "Discovery", 00:31:06.321 "listen_addresses": [ 00:31:06.321 { 00:31:06.321 "trtype": "RDMA", 00:31:06.321 "adrfam": "IPv4", 00:31:06.321 "traddr": "192.168.100.8", 00:31:06.321 "trsvcid": "4420" 00:31:06.321 } 00:31:06.321 ], 00:31:06.321 "allow_any_host": true, 00:31:06.321 "hosts": [] 00:31:06.321 }, 00:31:06.321 { 00:31:06.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.321 "subtype": "NVMe", 00:31:06.321 "listen_addresses": [ 00:31:06.321 { 00:31:06.321 "trtype": "RDMA", 00:31:06.321 "adrfam": "IPv4", 00:31:06.321 "traddr": 
"192.168.100.8", 00:31:06.321 "trsvcid": "4420" 00:31:06.321 } 00:31:06.321 ], 00:31:06.321 "allow_any_host": true, 00:31:06.321 "hosts": [], 00:31:06.321 "serial_number": "SPDK00000000000001", 00:31:06.321 "model_number": "SPDK bdev Controller", 00:31:06.321 "max_namespaces": 32, 00:31:06.321 "min_cntlid": 1, 00:31:06.321 "max_cntlid": 65519, 00:31:06.321 "namespaces": [ 00:31:06.321 { 00:31:06.321 "nsid": 1, 00:31:06.321 "bdev_name": "Malloc0", 00:31:06.321 "name": "Malloc0", 00:31:06.321 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:06.321 "eui64": "ABCDEF0123456789", 00:31:06.321 "uuid": "800a3bf4-45ad-4df8-acd4-2e2ca9627451" 00:31:06.321 } 00:31:06.321 ] 00:31:06.321 } 00:31:06.321 ] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.321 05:48:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:06.595 [2024-11-27 05:48:02.913618] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:31:06.595 [2024-11-27 05:48:02.913689] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504613 ] 00:31:06.595 [2024-11-27 05:48:02.999790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:06.595 [2024-11-27 05:48:02.999896] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:06.595 [2024-11-27 05:48:02.999937] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:06.595 [2024-11-27 05:48:02.999946] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:06.595 [2024-11-27 05:48:02.999993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:06.595 [2024-11-27 05:48:03.011145] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
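The CM-event line above ("Requested queue depth 32. Target receive queue depth 32.") reflects the initiator clamping its requested depth to what the target advertises. A trivial sketch of that clamp, with both values taken from the log (variable names are illustrative, not SPDK's):

```shell
# min(requested, target-advertised) queue depth; values copied from the log.
requested=32
target_rq_depth=32
qdepth=$(( requested < target_rq_depth ? requested : target_rq_depth ))
echo "negotiated queue depth: $qdepth"
```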
00:31:06.596 [2024-11-27 05:48:03.021633] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:06.596 [2024-11-27 05:48:03.021654] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:06.596 [2024-11-27 05:48:03.021676] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021687] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021699] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021708] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021719] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021728] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021737] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021745] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021755] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021763] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021773] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021781] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021790] nvme_rdma.c: 
878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021800] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021812] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021820] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021830] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021838] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021849] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021857] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021867] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021875] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021885] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021893] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021909] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021917] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021928] 
nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021936] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021946] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021955] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021965] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.021973] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:06.596 [2024-11-27 05:48:03.021983] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:06.596 [2024-11-27 05:48:03.021990] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:06.596 [2024-11-27 05:48:03.022024] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.022046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x180300 00:31:06.596 [2024-11-27 05:48:03.026629] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.026654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.026672] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.026685] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:06.596 [2024-11-27 05:48:03.026703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:06.596 [2024-11-27 05:48:03.026714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:06.596 [2024-11-27 05:48:03.026751] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.026766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.596 [2024-11-27 05:48:03.026807] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.026817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.026833] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:06.596 [2024-11-27 05:48:03.026845] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.026857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:06.596 [2024-11-27 05:48:03.026869] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.026885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.596 [2024-11-27 05:48:03.026903] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.026914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f 
sqhd:0003 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.026925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:06.596 [2024-11-27 05:48:03.026936] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.026946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:06.596 [2024-11-27 05:48:03.026963] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.026975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.596 [2024-11-27 05:48:03.026994] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.027003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.027015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:06.596 [2024-11-27 05:48:03.027023] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027037] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.596 [2024-11-27 05:48:03.027072] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.027081] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.027094] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:06.596 [2024-11-27 05:48:03.027103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:06.596 [2024-11-27 05:48:03.027115] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:06.596 [2024-11-27 05:48:03.027236] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:06.596 [2024-11-27 05:48:03.027248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:06.596 [2024-11-27 05:48:03.027265] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.596 [2024-11-27 05:48:03.027304] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.027312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.027324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 
(timeout 15000 ms) 00:31:06.596 [2024-11-27 05:48:03.027333] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027347] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.596 [2024-11-27 05:48:03.027380] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.027388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.027401] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:06.596 [2024-11-27 05:48:03.027410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:06.596 [2024-11-27 05:48:03.027421] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027435] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:06.596 [2024-11-27 05:48:03.027449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:06.596 [2024-11-27 05:48:03.027467] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 
cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:31:06.596 [2024-11-27 05:48:03.027553] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.596 [2024-11-27 05:48:03.027565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:06.596 [2024-11-27 05:48:03.027581] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:06.596 [2024-11-27 05:48:03.027596] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:06.596 [2024-11-27 05:48:03.027605] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:06.596 [2024-11-27 05:48:03.027625] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:06.596 [2024-11-27 05:48:03.027634] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:06.596 [2024-11-27 05:48:03.027650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:06.596 [2024-11-27 05:48:03.027662] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180300 00:31:06.596 [2024-11-27 05:48:03.027679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:06.596 [2024-11-27 05:48:03.027699] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.597 [2024-11-27 05:48:03.027743] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.597 [2024-11-27 05:48:03.027754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:06.597 [2024-11-27 05:48:03.027772] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.597 [2024-11-27 05:48:03.027796] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.597 [2024-11-27 05:48:03.027817] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.597 [2024-11-27 05:48:03.027838] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.597 [2024-11-27 05:48:03.027858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:06.597 [2024-11-27 05:48:03.027873] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 
0x10 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:06.597 [2024-11-27 05:48:03.027901] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.597 [2024-11-27 05:48:03.027935] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.597 [2024-11-27 05:48:03.027944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:06.597 [2024-11-27 05:48:03.027955] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:06.597 [2024-11-27 05:48:03.027965] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:06.597 [2024-11-27 05:48:03.027979] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.027995] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.028012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:31:06.597 [2024-11-27 05:48:03.028048] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.597 [2024-11-27 05:48:03.028059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 
cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:06.597 [2024-11-27 05:48:03.028076] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.028097] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:06.597 [2024-11-27 05:48:03.028142] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.028157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0x180300 00:31:06.597 [2024-11-27 05:48:03.028167] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.028185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.597 [2024-11-27 05:48:03.028229] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.597 [2024-11-27 05:48:03.028240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:06.597 [2024-11-27 05:48:03.028267] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.028281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180300 00:31:06.597 [2024-11-27 05:48:03.028291] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180300 00:31:06.597 [2024-11-27 05:48:03.028302] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.597 
[2024-11-27 05:48:03.028310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:31:06.597 [2024-11-27 05:48:03.028321] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180300
00:31:06.597 [2024-11-27 05:48:03.028329] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:31:06.597 [2024-11-27 05:48:03.028339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:31:06.597 [2024-11-27 05:48:03.028357] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180300
00:31:06.597 [2024-11-27 05:48:03.028373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180300
00:31:06.597 [2024-11-27 05:48:03.028382] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180300
00:31:06.597 [2024-11-27 05:48:03.028401] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:31:06.597 [2024-11-27 05:48:03.028409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:31:06.597 [2024-11-27 05:48:03.028430] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180300
00:31:06.597 =====================================================
00:31:06.597 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:31:06.597 =====================================================
00:31:06.597 Controller Capabilities/Features
00:31:06.597 ================================
00:31:06.597 Vendor ID: 0000
00:31:06.597 Subsystem Vendor ID: 0000
00:31:06.597 Serial Number: ....................
00:31:06.597 Model Number: ........................................
00:31:06.597 Firmware Version: 25.01
00:31:06.597 Recommended Arb Burst: 0
00:31:06.597 IEEE OUI Identifier: 00 00 00
00:31:06.597 Multi-path I/O
00:31:06.597 May have multiple subsystem ports: No
00:31:06.597 May have multiple controllers: No
00:31:06.597 Associated with SR-IOV VF: No
00:31:06.597 Max Data Transfer Size: 131072
00:31:06.597 Max Number of Namespaces: 0
00:31:06.597 Max Number of I/O Queues: 1024
00:31:06.597 NVMe Specification Version (VS): 1.3
00:31:06.597 NVMe Specification Version (Identify): 1.3
00:31:06.597 Maximum Queue Entries: 128
00:31:06.597 Contiguous Queues Required: Yes
00:31:06.597 Arbitration Mechanisms Supported
00:31:06.597 Weighted Round Robin: Not Supported
00:31:06.597 Vendor Specific: Not Supported
00:31:06.597 Reset Timeout: 15000 ms
00:31:06.597 Doorbell Stride: 4 bytes
00:31:06.597 NVM Subsystem Reset: Not Supported
00:31:06.597 Command Sets Supported
00:31:06.597 NVM Command Set: Supported
00:31:06.597 Boot Partition: Not Supported
00:31:06.597 Memory Page Size Minimum: 4096 bytes
00:31:06.597 Memory Page Size Maximum: 4096 bytes
00:31:06.597 Persistent Memory Region: Not Supported
00:31:06.597 Optional Asynchronous Events Supported
00:31:06.597 Namespace Attribute Notices: Not Supported
00:31:06.597 Firmware Activation Notices: Not Supported
00:31:06.597 ANA Change Notices: Not Supported
00:31:06.597 PLE Aggregate Log Change Notices: Not Supported
00:31:06.597 LBA Status Info Alert Notices: Not Supported
00:31:06.597 EGE Aggregate Log Change Notices: Not Supported
00:31:06.597 Normal NVM Subsystem Shutdown event: Not Supported
00:31:06.597 Zone Descriptor Change Notices: Not Supported
00:31:06.597 Discovery Log Change Notices: Supported
00:31:06.597 Controller Attributes
00:31:06.597 128-bit Host Identifier: Not Supported
00:31:06.597 Non-Operational Permissive Mode: Not Supported
00:31:06.597 NVM Sets: Not Supported
00:31:06.597 Read Recovery Levels: Not Supported
00:31:06.597 Endurance Groups: Not Supported
00:31:06.597 Predictable Latency Mode: Not Supported
00:31:06.597 Traffic Based Keep ALive: Not Supported
00:31:06.597 Namespace Granularity: Not Supported
00:31:06.597 SQ Associations: Not Supported
00:31:06.597 UUID List: Not Supported
00:31:06.597 Multi-Domain Subsystem: Not Supported
00:31:06.597 Fixed Capacity Management: Not Supported
00:31:06.597 Variable Capacity Management: Not Supported
00:31:06.597 Delete Endurance Group: Not Supported
00:31:06.597 Delete NVM Set: Not Supported
00:31:06.597 Extended LBA Formats Supported: Not Supported
00:31:06.597 Flexible Data Placement Supported: Not Supported
00:31:06.597
00:31:06.597 Controller Memory Buffer Support
00:31:06.597 ================================
00:31:06.597 Supported: No
00:31:06.597
00:31:06.597 Persistent Memory Region Support
00:31:06.597 ================================
00:31:06.597 Supported: No
00:31:06.597
00:31:06.597 Admin Command Set Attributes
00:31:06.597 ============================
00:31:06.597 Security Send/Receive: Not Supported
00:31:06.597 Format NVM: Not Supported
00:31:06.597 Firmware Activate/Download: Not Supported
00:31:06.597 Namespace Management: Not Supported
00:31:06.597 Device Self-Test: Not Supported
00:31:06.597 Directives: Not Supported
00:31:06.597 NVMe-MI: Not Supported
00:31:06.597 Virtualization Management: Not Supported
00:31:06.597 Doorbell Buffer Config: Not Supported
00:31:06.597 Get LBA Status Capability: Not Supported
00:31:06.597 Command & Feature Lockdown Capability: Not Supported
00:31:06.597 Abort Command Limit: 1
00:31:06.597 Async Event Request Limit: 4
00:31:06.597 Number of Firmware Slots: N/A
00:31:06.597 Firmware Slot 1 Read-Only: N/A
00:31:06.598 Firmware Activation Without Reset: N/A
00:31:06.598 Multiple Update Detection Support: N/A
00:31:06.598 Firmware Update Granularity: No Information Provided
00:31:06.598 Per-Namespace SMART Log: No
00:31:06.598 Asymmetric Namespace Access Log Page: Not Supported
00:31:06.598 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:06.598 Command Effects Log Page: Not Supported
00:31:06.598 Get Log Page Extended Data: Supported
00:31:06.598 Telemetry Log Pages: Not Supported
00:31:06.598 Persistent Event Log Pages: Not Supported
00:31:06.598 Supported Log Pages Log Page: May Support
00:31:06.598 Commands Supported & Effects Log Page: Not Supported
00:31:06.598 Feature Identifiers & Effects Log Page:May Support
00:31:06.598 NVMe-MI Commands & Effects Log Page: May Support
00:31:06.598 Data Area 4 for Telemetry Log: Not Supported
00:31:06.598 Error Log Page Entries Supported: 128
00:31:06.598 Keep Alive: Not Supported
00:31:06.598
00:31:06.598 NVM Command Set Attributes
00:31:06.598 ==========================
00:31:06.598 Submission Queue Entry Size
00:31:06.598 Max: 1
00:31:06.598 Min: 1
00:31:06.598 Completion Queue Entry Size
00:31:06.598 Max: 1
00:31:06.598 Min: 1
00:31:06.598 Number of Namespaces: 0
00:31:06.598 Compare Command: Not Supported
00:31:06.598 Write Uncorrectable Command: Not Supported
00:31:06.598 Dataset Management Command: Not Supported
00:31:06.598 Write Zeroes Command: Not Supported
00:31:06.598 Set Features Save Field: Not Supported
00:31:06.598 Reservations: Not Supported
00:31:06.598 Timestamp: Not Supported
00:31:06.598 Copy: Not Supported
00:31:06.598 Volatile Write Cache: Not Present
00:31:06.598 Atomic Write Unit (Normal): 1
00:31:06.598 Atomic Write Unit (PFail): 1
00:31:06.598 Atomic Compare & Write Unit: 1
00:31:06.598 Fused Compare & Write: Supported
00:31:06.598 Scatter-Gather List
00:31:06.598 SGL Command Set: Supported
00:31:06.598 SGL Keyed: Supported
00:31:06.598 SGL Bit Bucket Descriptor: Not Supported
00:31:06.598 SGL Metadata Pointer: Not Supported
00:31:06.598 Oversized SGL: Not Supported
00:31:06.598 SGL Metadata Address: Not Supported
00:31:06.598 SGL Offset: Supported
00:31:06.598 Transport SGL Data Block: Not Supported
00:31:06.598 Replay Protected Memory Block: Not Supported
00:31:06.598
00:31:06.598 Firmware Slot Information
00:31:06.598 =========================
00:31:06.598 Active slot: 0
00:31:06.598
00:31:06.598
00:31:06.598 Error Log
00:31:06.598 =========
00:31:06.598
00:31:06.598 Active Namespaces
00:31:06.598 =================
00:31:06.598 Discovery Log Page
00:31:06.598 ==================
00:31:06.598 Generation Counter: 2
00:31:06.598 Number of Records: 2
00:31:06.598 Record Format: 0
00:31:06.598
00:31:06.598 Discovery Log Entry 0
00:31:06.598 ----------------------
00:31:06.598 Transport Type: 1 (RDMA)
00:31:06.598 Address Family: 1 (IPv4)
00:31:06.598 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:06.598 Entry Flags:
00:31:06.598 Duplicate Returned Information: 1
00:31:06.598 Explicit Persistent Connection Support for Discovery: 1
00:31:06.598 Transport Requirements:
00:31:06.598 Secure Channel: Not Required
00:31:06.598 Port ID: 0 (0x0000)
00:31:06.598 Controller ID: 65535 (0xffff)
00:31:06.598 Admin Max SQ Size: 128
00:31:06.598 Transport Service Identifier: 4420
00:31:06.598 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:06.598 Transport Address: 192.168.100.8
00:31:06.598 Transport Specific Address Subtype - RDMA
00:31:06.598 RDMA QP Service Type: 1 (Reliable Connected)
00:31:06.598 RDMA Provider Type: 1 (No provider specified)
00:31:06.598 RDMA CM Service: 1 (RDMA_CM)
00:31:06.598 Discovery Log Entry 1
00:31:06.598 ----------------------
00:31:06.598 Transport Type: 1 (RDMA)
00:31:06.598 Address Family: 1 (IPv4)
00:31:06.598 Subsystem Type: 2 (NVM Subsystem)
00:31:06.598 Entry Flags:
00:31:06.598 Duplicate Returned Information: 0
00:31:06.598 Explicit Persistent Connection Support for Discovery: 0
00:31:06.598 Transport Requirements:
00:31:06.598 Secure Channel: Not Required
00:31:06.598 Port ID: 0 (0x0000)
00:31:06.598 Controller ID: 65535 (0xffff)
00:31:06.598 Admin Max SQ Size: [2024-11-27 05:48:03.028550] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*:
[nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:06.598 [2024-11-27 05:48:03.028574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.028585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.028597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.028617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.028635] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.598 [2024-11-27 05:48:03.028673] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.598 [2024-11-27 05:48:03.028683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.028702] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.598 [2024-11-27 05:48:03.028728] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028743] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.598 [2024-11-27 
05:48:03.028756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.028766] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:06.598 [2024-11-27 05:48:03.028780] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:06.598 [2024-11-27 05:48:03.028790] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028807] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.598 [2024-11-27 05:48:03.028856] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.598 [2024-11-27 05:48:03.028865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.028876] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028889] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.598 [2024-11-27 05:48:03.028926] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.598 [2024-11-27 05:48:03.028937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:31:06.598 
[2024-11-27 05:48:03.028947] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028964] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.028975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.598 [2024-11-27 05:48:03.028999] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.598 [2024-11-27 05:48:03.029007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.029022] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.029034] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.029047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.598 [2024-11-27 05:48:03.029061] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.598 [2024-11-27 05:48:03.029071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:31:06.598 [2024-11-27 05:48:03.029081] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.029097] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.598 [2024-11-27 05:48:03.029108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
00:31:06.598 [2024-11-27 05:48:03.029133] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029155] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029167] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029196] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029215] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029234] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029266] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029285] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029297] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029330] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029349] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029364] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029395] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029414] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029426] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029459] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029481] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029499] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029533] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029552] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029563] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029595] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029620] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029635] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029645] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029674] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029697] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029709] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029743] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029769] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029782] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029818] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029837] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029849] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029880] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029903] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029917] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.029954] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.029963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.029973] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.029985] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 
05:48:03.030013] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.030024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.030032] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030048] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.030082] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.030091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.030103] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030115] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.030141] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.030151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.030160] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030176] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.030204] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.030215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.030225] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030237] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.599 [2024-11-27 05:48:03.030250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.599 [2024-11-27 05:48:03.030275] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.599 [2024-11-27 05:48:03.030285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:06.599 [2024-11-27 05:48:03.030294] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030310] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.600 [2024-11-27 05:48:03.030346] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.600 [2024-11-27 05:48:03.030355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:31:06.600 [2024-11-27 05:48:03.030365] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030381] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.600 [2024-11-27 05:48:03.030414] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.600 [2024-11-27 05:48:03.030424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:31:06.600 [2024-11-27 05:48:03.030433] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030446] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.600 [2024-11-27 05:48:03.030483] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.600 [2024-11-27 05:48:03.030492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:31:06.600 [2024-11-27 05:48:03.030502] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030516] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030535] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.600 [2024-11-27 05:48:03.030548] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.600 [2024-11-27 05:48:03.030559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:31:06.600 [2024-11-27 05:48:03.030567] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030584] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.030595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.600 [2024-11-27 05:48:03.034624] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.600 [2024-11-27 05:48:03.034642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:31:06.600 [2024-11-27 05:48:03.034655] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.034672] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.034687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.600 [2024-11-27 05:48:03.034718] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.600 [2024-11-27 05:48:03.034729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000d p:0 m:0 dnr:0 00:31:06.600 [2024-11-27 05:48:03.034737] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180300 00:31:06.600 [2024-11-27 05:48:03.034750] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:31:06.600 128 00:31:06.600 Transport Service Identifier: 4420 00:31:06.600 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:06.600 Transport Address: 192.168.100.8 00:31:06.600 Transport Specific Address Subtype - RDMA 00:31:06.600 RDMA QP Service Type: 1 (Reliable Connected) 00:31:06.600 RDMA Provider Type: 1 (No provider specified) 00:31:06.600 RDMA CM Service: 1 (RDMA_CM) 00:31:06.600 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:06.861 [2024-11-27 05:48:03.197518] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:31:06.861 [2024-11-27 05:48:03.197592] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504620 ] 00:31:06.861 [2024-11-27 05:48:03.283787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:06.861 [2024-11-27 05:48:03.283888] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:06.861 [2024-11-27 05:48:03.283915] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:06.861 [2024-11-27 05:48:03.283924] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:06.861 [2024-11-27 05:48:03.283971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:06.861 [2024-11-27 05:48:03.295113] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:31:06.861 [2024-11-27 05:48:03.305626] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:06.861 [2024-11-27 05:48:03.305646] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:06.861 [2024-11-27 05:48:03.305670] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305681] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305693] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305701] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305711] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305719] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305731] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305739] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305749] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305759] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305769] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305777] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305788] nvme_rdma.c: 
878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305796] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305806] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305814] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305823] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305832] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180300 00:31:06.861 [2024-11-27 05:48:03.305843] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305851] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305860] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305868] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305878] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305886] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305903] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305911] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305920] 
nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305928] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305941] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305949] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305958] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.305966] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:06.862 [2024-11-27 05:48:03.305977] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:06.862 [2024-11-27 05:48:03.305983] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:06.862 [2024-11-27 05:48:03.306019] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.306039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x180300 00:31:06.862 [2024-11-27 05:48:03.310631] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.310654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.310671] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.310685] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:06.862 [2024-11-27 05:48:03.310702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:06.862 [2024-11-27 05:48:03.310715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:06.862 [2024-11-27 05:48:03.310742] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.310756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.862 [2024-11-27 05:48:03.310784] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.310794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.310806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:06.862 [2024-11-27 05:48:03.310817] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.310830] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:06.862 [2024-11-27 05:48:03.310841] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.310857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.862 [2024-11-27 05:48:03.310875] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.310885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:31:06.862 
[2024-11-27 05:48:03.310895] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:06.862 [2024-11-27 05:48:03.310905] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.310915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:06.862 [2024-11-27 05:48:03.310931] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.310942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.862 [2024-11-27 05:48:03.310962] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.310971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.310982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:06.862 [2024-11-27 05:48:03.310993] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311007] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.862 [2024-11-27 05:48:03.311039] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.311050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.311062] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:06.862 [2024-11-27 05:48:03.311071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:06.862 [2024-11-27 05:48:03.311084] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:06.862 [2024-11-27 05:48:03.311208] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:06.862 [2024-11-27 05:48:03.311216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:06.862 [2024-11-27 05:48:03.311232] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.862 [2024-11-27 05:48:03.311273] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.311282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.311293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:06.862 [2024-11-27 05:48:03.311302] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf348 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311316] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.862 [2024-11-27 05:48:03.311346] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.311354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.311366] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:06.862 [2024-11-27 05:48:03.311375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:06.862 [2024-11-27 05:48:03.311388] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:06.862 [2024-11-27 05:48:03.311416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:06.862 [2024-11-27 05:48:03.311435] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:31:06.862 [2024-11-27 05:48:03.311529] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 [2024-11-27 05:48:03.311541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.311556] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:06.862 [2024-11-27 05:48:03.311570] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:06.862 [2024-11-27 05:48:03.311579] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:06.862 [2024-11-27 05:48:03.311591] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:06.862 [2024-11-27 05:48:03.311600] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:06.862 [2024-11-27 05:48:03.311620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:06.862 [2024-11-27 05:48:03.311629] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:06.862 [2024-11-27 05:48:03.311660] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.862 [2024-11-27 05:48:03.311676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.862 [2024-11-27 05:48:03.311702] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.862 
[2024-11-27 05:48:03.311713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:06.862 [2024-11-27 05:48:03.311728] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.863 [2024-11-27 05:48:03.311753] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.863 [2024-11-27 05:48:03.311774] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.863 [2024-11-27 05:48:03.311795] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.863 [2024-11-27 05:48:03.311814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.311829] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:06.863 
[2024-11-27 05:48:03.311857] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.863 [2024-11-27 05:48:03.311896] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.311904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.311915] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:06.863 [2024-11-27 05:48:03.311924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.311935] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.311959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.311971] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.311988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.863 [2024-11-27 05:48:03.312003] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:31:06.863 [2024-11-27 05:48:03.312013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312099] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312134] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180300 00:31:06.863 [2024-11-27 05:48:03.312182] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312225] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:06.863 [2024-11-27 05:48:03.312243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312254] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:06.863 
[2024-11-27 05:48:03.312283] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:31:06.863 [2024-11-27 05:48:03.312349] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312391] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312418] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:31:06.863 [2024-11-27 05:48:03.312468] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
identify ns iocs specific (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312507] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312588] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:06.863 [2024-11-27 05:48:03.312597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:06.863 [2024-11-27 05:48:03.312614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:06.863 [2024-11-27 05:48:03.312647] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.863 
[2024-11-27 05:48:03.312672] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.863 [2024-11-27 05:48:03.312701] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312727] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312738] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312756] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312769] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.863 [2024-11-27 05:48:03.312796] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312817] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312831] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.863 [2024-11-27 05:48:03.312870] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312891] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312903] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.863 [2024-11-27 05:48:03.312936] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.863 [2024-11-27 05:48:03.312947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:31:06.863 [2024-11-27 05:48:03.312955] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.312985] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180300 00:31:06.863 [2024-11-27 05:48:03.313000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180300 00:31:06.864 [2024-11-27 05:48:03.313016] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180300 00:31:06.864 [2024-11-27 05:48:03.313027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180300 00:31:06.864 [2024-11-27 05:48:03.313044] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180300 00:31:06.864 [2024-11-27 05:48:03.313055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0x180300 00:31:06.864 [2024-11-27 05:48:03.313072] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x180300 00:31:06.864 [2024-11-27 05:48:03.313083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0x180300 00:31:06.864 [2024-11-27 05:48:03.313098] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.864 [2024-11-27 05:48:03.313107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:06.864 [2024-11-27 05:48:03.313134] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180300 00:31:06.864 [2024-11-27 05:48:03.313146] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.864 [2024-11-27 05:48:03.313156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:06.864 [2024-11-27 05:48:03.313171] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180300 
00:31:06.864 [2024-11-27 05:48:03.313181] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.864 [2024-11-27 05:48:03.313189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:06.864 [2024-11-27 05:48:03.313205] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180300 00:31:06.864 [2024-11-27 05:48:03.313214] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.864 [2024-11-27 05:48:03.313224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:06.864 [2024-11-27 05:48:03.313241] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180300 00:31:06.864 ===================================================== 00:31:06.864 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.864 ===================================================== 00:31:06.864 Controller Capabilities/Features 00:31:06.864 ================================ 00:31:06.864 Vendor ID: 8086 00:31:06.864 Subsystem Vendor ID: 8086 00:31:06.864 Serial Number: SPDK00000000000001 00:31:06.864 Model Number: SPDK bdev Controller 00:31:06.864 Firmware Version: 25.01 00:31:06.864 Recommended Arb Burst: 6 00:31:06.864 IEEE OUI Identifier: e4 d2 5c 00:31:06.864 Multi-path I/O 00:31:06.864 May have multiple subsystem ports: Yes 00:31:06.864 May have multiple controllers: Yes 00:31:06.864 Associated with SR-IOV VF: No 00:31:06.864 Max Data Transfer Size: 131072 00:31:06.864 Max Number of Namespaces: 32 00:31:06.864 Max Number of I/O Queues: 127 00:31:06.864 NVMe Specification Version (VS): 1.3 00:31:06.864 NVMe Specification Version (Identify): 1.3 00:31:06.864 Maximum Queue Entries: 128 00:31:06.864 Contiguous Queues Required: Yes 00:31:06.864 Arbitration Mechanisms Supported 00:31:06.864 
Weighted Round Robin: Not Supported 00:31:06.864 Vendor Specific: Not Supported 00:31:06.864 Reset Timeout: 15000 ms 00:31:06.864 Doorbell Stride: 4 bytes 00:31:06.864 NVM Subsystem Reset: Not Supported 00:31:06.864 Command Sets Supported 00:31:06.864 NVM Command Set: Supported 00:31:06.864 Boot Partition: Not Supported 00:31:06.864 Memory Page Size Minimum: 4096 bytes 00:31:06.864 Memory Page Size Maximum: 4096 bytes 00:31:06.864 Persistent Memory Region: Not Supported 00:31:06.864 Optional Asynchronous Events Supported 00:31:06.864 Namespace Attribute Notices: Supported 00:31:06.864 Firmware Activation Notices: Not Supported 00:31:06.864 ANA Change Notices: Not Supported 00:31:06.864 PLE Aggregate Log Change Notices: Not Supported 00:31:06.864 LBA Status Info Alert Notices: Not Supported 00:31:06.864 EGE Aggregate Log Change Notices: Not Supported 00:31:06.864 Normal NVM Subsystem Shutdown event: Not Supported 00:31:06.864 Zone Descriptor Change Notices: Not Supported 00:31:06.864 Discovery Log Change Notices: Not Supported 00:31:06.864 Controller Attributes 00:31:06.864 128-bit Host Identifier: Supported 00:31:06.864 Non-Operational Permissive Mode: Not Supported 00:31:06.864 NVM Sets: Not Supported 00:31:06.864 Read Recovery Levels: Not Supported 00:31:06.864 Endurance Groups: Not Supported 00:31:06.864 Predictable Latency Mode: Not Supported 00:31:06.864 Traffic Based Keep ALive: Not Supported 00:31:06.864 Namespace Granularity: Not Supported 00:31:06.864 SQ Associations: Not Supported 00:31:06.864 UUID List: Not Supported 00:31:06.864 Multi-Domain Subsystem: Not Supported 00:31:06.864 Fixed Capacity Management: Not Supported 00:31:06.864 Variable Capacity Management: Not Supported 00:31:06.864 Delete Endurance Group: Not Supported 00:31:06.864 Delete NVM Set: Not Supported 00:31:06.864 Extended LBA Formats Supported: Not Supported 00:31:06.864 Flexible Data Placement Supported: Not Supported 00:31:06.864 00:31:06.864 Controller Memory Buffer Support 
00:31:06.864 ================================ 00:31:06.864 Supported: No 00:31:06.864 00:31:06.864 Persistent Memory Region Support 00:31:06.864 ================================ 00:31:06.864 Supported: No 00:31:06.864 00:31:06.864 Admin Command Set Attributes 00:31:06.864 ============================ 00:31:06.864 Security Send/Receive: Not Supported 00:31:06.864 Format NVM: Not Supported 00:31:06.864 Firmware Activate/Download: Not Supported 00:31:06.864 Namespace Management: Not Supported 00:31:06.864 Device Self-Test: Not Supported 00:31:06.864 Directives: Not Supported 00:31:06.864 NVMe-MI: Not Supported 00:31:06.864 Virtualization Management: Not Supported 00:31:06.864 Doorbell Buffer Config: Not Supported 00:31:06.864 Get LBA Status Capability: Not Supported 00:31:06.864 Command & Feature Lockdown Capability: Not Supported 00:31:06.864 Abort Command Limit: 4 00:31:06.864 Async Event Request Limit: 4 00:31:06.864 Number of Firmware Slots: N/A 00:31:06.864 Firmware Slot 1 Read-Only: N/A 00:31:06.864 Firmware Activation Without Reset: N/A 00:31:06.864 Multiple Update Detection Support: N/A 00:31:06.864 Firmware Update Granularity: No Information Provided 00:31:06.864 Per-Namespace SMART Log: No 00:31:06.864 Asymmetric Namespace Access Log Page: Not Supported 00:31:06.864 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:06.864 Command Effects Log Page: Supported 00:31:06.864 Get Log Page Extended Data: Supported 00:31:06.864 Telemetry Log Pages: Not Supported 00:31:06.864 Persistent Event Log Pages: Not Supported 00:31:06.864 Supported Log Pages Log Page: May Support 00:31:06.864 Commands Supported & Effects Log Page: Not Supported 00:31:06.864 Feature Identifiers & Effects Log Page:May Support 00:31:06.864 NVMe-MI Commands & Effects Log Page: May Support 00:31:06.864 Data Area 4 for Telemetry Log: Not Supported 00:31:06.864 Error Log Page Entries Supported: 128 00:31:06.864 Keep Alive: Supported 00:31:06.864 Keep Alive Granularity: 10000 ms 00:31:06.864 
00:31:06.864 NVM Command Set Attributes 00:31:06.864 ========================== 00:31:06.864 Submission Queue Entry Size 00:31:06.864 Max: 64 00:31:06.864 Min: 64 00:31:06.864 Completion Queue Entry Size 00:31:06.864 Max: 16 00:31:06.864 Min: 16 00:31:06.864 Number of Namespaces: 32 00:31:06.864 Compare Command: Supported 00:31:06.864 Write Uncorrectable Command: Not Supported 00:31:06.864 Dataset Management Command: Supported 00:31:06.864 Write Zeroes Command: Supported 00:31:06.864 Set Features Save Field: Not Supported 00:31:06.864 Reservations: Supported 00:31:06.864 Timestamp: Not Supported 00:31:06.864 Copy: Supported 00:31:06.864 Volatile Write Cache: Present 00:31:06.864 Atomic Write Unit (Normal): 1 00:31:06.864 Atomic Write Unit (PFail): 1 00:31:06.864 Atomic Compare & Write Unit: 1 00:31:06.864 Fused Compare & Write: Supported 00:31:06.864 Scatter-Gather List 00:31:06.864 SGL Command Set: Supported 00:31:06.864 SGL Keyed: Supported 00:31:06.864 SGL Bit Bucket Descriptor: Not Supported 00:31:06.864 SGL Metadata Pointer: Not Supported 00:31:06.864 Oversized SGL: Not Supported 00:31:06.864 SGL Metadata Address: Not Supported 00:31:06.864 SGL Offset: Supported 00:31:06.864 Transport SGL Data Block: Not Supported 00:31:06.864 Replay Protected Memory Block: Not Supported 00:31:06.864 00:31:06.864 Firmware Slot Information 00:31:06.864 ========================= 00:31:06.864 Active slot: 1 00:31:06.864 Slot 1 Firmware Revision: 25.01 00:31:06.864 00:31:06.864 00:31:06.864 Commands Supported and Effects 00:31:06.864 ============================== 00:31:06.864 Admin Commands 00:31:06.864 -------------- 00:31:06.864 Get Log Page (02h): Supported 00:31:06.865 Identify (06h): Supported 00:31:06.865 Abort (08h): Supported 00:31:06.865 Set Features (09h): Supported 00:31:06.865 Get Features (0Ah): Supported 00:31:06.865 Asynchronous Event Request (0Ch): Supported 00:31:06.865 Keep Alive (18h): Supported 00:31:06.865 I/O Commands 00:31:06.865 ------------ 00:31:06.865 
Flush (00h): Supported LBA-Change 00:31:06.865 Write (01h): Supported LBA-Change 00:31:06.865 Read (02h): Supported 00:31:06.865 Compare (05h): Supported 00:31:06.865 Write Zeroes (08h): Supported LBA-Change 00:31:06.865 Dataset Management (09h): Supported LBA-Change 00:31:06.865 Copy (19h): Supported LBA-Change 00:31:06.865 00:31:06.865 Error Log 00:31:06.865 ========= 00:31:06.865 00:31:06.865 Arbitration 00:31:06.865 =========== 00:31:06.865 Arbitration Burst: 1 00:31:06.865 00:31:06.865 Power Management 00:31:06.865 ================ 00:31:06.865 Number of Power States: 1 00:31:06.865 Current Power State: Power State #0 00:31:06.865 Power State #0: 00:31:06.865 Max Power: 0.00 W 00:31:06.865 Non-Operational State: Operational 00:31:06.865 Entry Latency: Not Reported 00:31:06.865 Exit Latency: Not Reported 00:31:06.865 Relative Read Throughput: 0 00:31:06.865 Relative Read Latency: 0 00:31:06.865 Relative Write Throughput: 0 00:31:06.865 Relative Write Latency: 0 00:31:06.865 Idle Power: Not Reported 00:31:06.865 Active Power: Not Reported 00:31:06.865 Non-Operational Permissive Mode: Not Supported 00:31:06.865 00:31:06.865 Health Information 00:31:06.865 ================== 00:31:06.865 Critical Warnings: 00:31:06.865 Available Spare Space: OK 00:31:06.865 Temperature: OK 00:31:06.865 Device Reliability: OK 00:31:06.865 Read Only: No 00:31:06.865 Volatile Memory Backup: OK 00:31:06.865 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:06.865 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:06.865 Available Spare: 0% 00:31:06.865 Available Spare Threshold: 0% 00:31:06.865 Life Percentage [2024-11-27 05:48:03.313376] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 
05:48:03.313412] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.313422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313433] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313477] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:06.865 [2024-11-27 05:48:03.313500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313547] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.313587] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.313596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313617] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 
0x180300 00:31:06.865 [2024-11-27 05:48:03.313629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.313641] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313661] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.313672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313682] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:06.865 [2024-11-27 05:48:03.313694] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:06.865 [2024-11-27 05:48:03.313704] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313719] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.313760] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.313769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313779] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313794] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 
[2024-11-27 05:48:03.313807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.313824] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.313834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313842] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313856] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.313889] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.313898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.313908] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313920] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.313948] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.313958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 
05:48:03.313966] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313982] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.313993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.314014] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.314022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.314032] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.314046] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.314060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.314093] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.314104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.314112] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.314126] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.314137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 
[2024-11-27 05:48:03.314162] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.314171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.314181] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.314193] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.314207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.865 [2024-11-27 05:48:03.314226] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.865 [2024-11-27 05:48:03.314242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:06.865 [2024-11-27 05:48:03.314250] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180300 00:31:06.865 [2024-11-27 05:48:03.314266] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.866 [2024-11-27 05:48:03.314301] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.866 [2024-11-27 05:48:03.314310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:06.866 [2024-11-27 05:48:03.314320] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314331] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.866 [2024-11-27 05:48:03.314360] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.866 [2024-11-27 05:48:03.314372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:06.866 [2024-11-27 05:48:03.314381] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314395] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.866 [2024-11-27 05:48:03.314425] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.866 [2024-11-27 05:48:03.314437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:06.866 [2024-11-27 05:48:03.314449] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314460] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.866 [2024-11-27 05:48:03.314494] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.866 [2024-11-27 05:48:03.314504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:31:06.866 [2024-11-27 05:48:03.314513] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314526] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.866 [2024-11-27 05:48:03.314556] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.866 [2024-11-27 05:48:03.314565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:06.866 [2024-11-27 05:48:03.314577] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314590] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.314605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.866 [2024-11-27 05:48:03.318637] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.866 [2024-11-27 05:48:03.318654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:31:06.866 [2024-11-27 05:48:03.318664] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.318681] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.318693] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:06.866 [2024-11-27 05:48:03.318724] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:06.866 [2024-11-27 05:48:03.318733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:31:06.866 [2024-11-27 05:48:03.318743] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180300 00:31:06.866 [2024-11-27 05:48:03.318758] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:31:06.866 Used: 0% 00:31:06.866 Data Units Read: 0 00:31:06.866 Data Units Written: 0 00:31:06.866 Host Read Commands: 0 00:31:06.866 Host Write Commands: 0 00:31:06.866 Controller Busy Time: 0 minutes 00:31:06.866 Power Cycles: 0 00:31:06.866 Power On Hours: 0 hours 00:31:06.866 Unsafe Shutdowns: 0 00:31:06.866 Unrecoverable Media Errors: 0 00:31:06.866 Lifetime Error Log Entries: 0 00:31:06.866 Warning Temperature Time: 0 minutes 00:31:06.866 Critical Temperature Time: 0 minutes 00:31:06.866 00:31:06.866 Number of Queues 00:31:06.866 ================ 00:31:06.866 Number of I/O Submission Queues: 127 00:31:06.866 Number of I/O Completion Queues: 127 00:31:06.866 00:31:06.866 Active Namespaces 00:31:06.866 ================= 00:31:06.866 Namespace ID:1 00:31:06.866 Error Recovery Timeout: Unlimited 00:31:06.866 Command Set Identifier: NVM (00h) 00:31:06.866 Deallocate: Supported 00:31:06.866 Deallocated/Unwritten Error: Not Supported 00:31:06.866 Deallocated Read Value: Unknown 00:31:06.866 Deallocate in Write Zeroes: Not Supported 00:31:06.866 Deallocated Guard Field: 0xFFFF 00:31:06.866 Flush: Supported 00:31:06.866 Reservation: Supported 00:31:06.866 Namespace Sharing Capabilities: Multiple Controllers 00:31:06.866 Size (in LBAs): 131072 (0GiB) 
00:31:06.866 Capacity (in LBAs): 131072 (0GiB) 00:31:06.866 Utilization (in LBAs): 131072 (0GiB) 00:31:06.866 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:06.866 EUI64: ABCDEF0123456789 00:31:06.866 UUID: 800a3bf4-45ad-4df8-acd4-2e2ca9627451 00:31:06.866 Thin Provisioning: Not Supported 00:31:06.866 Per-NS Atomic Units: Yes 00:31:06.866 Atomic Boundary Size (Normal): 0 00:31:06.866 Atomic Boundary Size (PFail): 0 00:31:06.866 Atomic Boundary Offset: 0 00:31:06.866 Maximum Single Source Range Length: 65535 00:31:06.866 Maximum Copy Length: 65535 00:31:06.866 Maximum Source Range Count: 1 00:31:06.866 NGUID/EUI64 Never Reused: No 00:31:06.866 Namespace Write Protected: No 00:31:06.866 Number of LBA Formats: 1 00:31:06.866 Current LBA Format: LBA Format #00 00:31:06.866 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:06.866 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.866 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:07.125 
05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:07.125 rmmod nvme_rdma 00:31:07.125 rmmod nvme_fabrics 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3504313 ']' 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3504313 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3504313 ']' 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3504313 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504313 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504313' 00:31:07.125 killing process with pid 3504313 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3504313 00:31:07.125 05:48:03 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@978 -- # wait 3504313 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:09.026 00:31:09.026 real 0m12.698s 00:31:09.026 user 0m15.067s 00:31:09.026 sys 0m7.296s 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:09.026 ************************************ 00:31:09.026 END TEST nvmf_identify 00:31:09.026 ************************************ 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.026 ************************************ 00:31:09.026 START TEST nvmf_perf 00:31:09.026 ************************************ 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:31:09.026 * Looking for test storage... 
00:31:09.026 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:31:09.026 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:09.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.286 --rc genhtml_branch_coverage=1 00:31:09.286 --rc genhtml_function_coverage=1 00:31:09.286 --rc genhtml_legend=1 00:31:09.286 --rc geninfo_all_blocks=1 00:31:09.286 --rc geninfo_unexecuted_blocks=1 00:31:09.286 00:31:09.286 ' 00:31:09.286 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:09.286 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:31:09.286 --rc genhtml_branch_coverage=1 00:31:09.286 --rc genhtml_function_coverage=1 00:31:09.287 --rc genhtml_legend=1 00:31:09.287 --rc geninfo_all_blocks=1 00:31:09.287 --rc geninfo_unexecuted_blocks=1 00:31:09.287 00:31:09.287 ' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:09.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.287 --rc genhtml_branch_coverage=1 00:31:09.287 --rc genhtml_function_coverage=1 00:31:09.287 --rc genhtml_legend=1 00:31:09.287 --rc geninfo_all_blocks=1 00:31:09.287 --rc geninfo_unexecuted_blocks=1 00:31:09.287 00:31:09.287 ' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:09.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.287 --rc genhtml_branch_coverage=1 00:31:09.287 --rc genhtml_function_coverage=1 00:31:09.287 --rc genhtml_legend=1 00:31:09.287 --rc geninfo_all_blocks=1 00:31:09.287 --rc geninfo_unexecuted_blocks=1 00:31:09.287 00:31:09.287 ' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # 
export PATH 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:09.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.287 05:48:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 
00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:17.404 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:17.404 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:17.404 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:17.404 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:17.404 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:17.405 
05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:17.405 
05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:17.405 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:17.405 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:17.405 altname enp217s0f0np0 00:31:17.405 altname ens818f0np0 00:31:17.405 inet 192.168.100.8/24 scope global mlx_0_0 00:31:17.405 valid_lft forever preferred_lft forever 00:31:17.405 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:17.664 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:17.664 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:17.664 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:17.664 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:17.664 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:17.664 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:17.664 05:48:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:17.664 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:17.664 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:17.664 altname enp217s0f1np1 00:31:17.664 altname ens818f1np1 00:31:17.664 inet 192.168.100.9/24 scope global mlx_0_1 00:31:17.664 valid_lft forever preferred_lft forever 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:17.664 05:48:14 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:17.664 192.168.100.9' 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:17.664 192.168.100.9' 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:17.664 192.168.100.9' 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:17.664 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3509473 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3509473 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3509473 ']' 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.665 05:48:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:17.665 [2024-11-27 05:48:14.212109] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:31:17.665 [2024-11-27 05:48:14.212203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.923 [2024-11-27 05:48:14.365014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.923 [2024-11-27 05:48:14.470538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.923 [2024-11-27 05:48:14.470597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.923 [2024-11-27 05:48:14.470614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.923 [2024-11-27 05:48:14.470627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.923 [2024-11-27 05:48:14.470637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:17.923 [2024-11-27 05:48:14.473201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.923 [2024-11-27 05:48:14.473275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.923 [2024-11-27 05:48:14.473374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.923 [2024-11-27 05:48:14.473382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:18.490 05:48:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:21.773 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:21.773 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:22.031 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:31:22.031 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:22.289 05:48:18 
nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:22.289 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:31:22.289 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:22.289 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:31:22.289 05:48:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:31:22.289 [2024-11-27 05:48:18.841641] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:31:22.289 [2024-11-27 05:48:18.866074] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029ec0/0x7fbf86148940) succeed. 00:31:22.548 [2024-11-27 05:48:18.875838] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a040/0x7fbf86104940) succeed. 
00:31:22.548 05:48:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.805 05:48:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:22.805 05:48:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.062 05:48:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:23.062 05:48:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:23.320 05:48:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:23.320 [2024-11-27 05:48:19.817270] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:23.320 05:48:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:23.579 05:48:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:31:23.579 05:48:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:23.579 05:48:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:23.579 05:48:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:25.481 Initializing NVMe Controllers 00:31:25.482 
Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:31:25.482 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:31:25.482 Initialization complete. Launching workers. 00:31:25.482 ======================================================== 00:31:25.482 Latency(us) 00:31:25.482 Device Information : IOPS MiB/s Average min max 00:31:25.482 PCIE (0000:d8:00.0) NSID 1 from core 0: 92948.01 363.08 343.69 46.26 5255.21 00:31:25.482 ======================================================== 00:31:25.482 Total : 92948.01 363.08 343.69 46.26 5255.21 00:31:25.482 00:31:25.482 05:48:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:28.765 Initializing NVMe Controllers 00:31:28.765 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.765 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.765 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:28.765 Initialization complete. Launching workers. 
00:31:28.765 ======================================================== 00:31:28.765 Latency(us) 00:31:28.765 Device Information : IOPS MiB/s Average min max 00:31:28.765 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6094.42 23.81 163.04 58.64 5067.09 00:31:28.765 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4729.25 18.47 211.03 83.71 5088.95 00:31:28.765 ======================================================== 00:31:28.765 Total : 10823.67 42.28 184.01 58.64 5088.95 00:31:28.765 00:31:28.765 05:48:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:32.051 Initializing NVMe Controllers 00:31:32.051 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.051 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:32.051 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:32.051 Initialization complete. Launching workers. 
00:31:32.051 ======================================================== 00:31:32.051 Latency(us) 00:31:32.051 Device Information : IOPS MiB/s Average min max 00:31:32.051 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16331.98 63.80 1963.02 536.31 5712.63 00:31:32.051 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7963.96 6606.92 9050.67 00:31:32.051 ======================================================== 00:31:32.051 Total : 20363.98 79.55 3151.18 536.31 9050.67 00:31:32.051 00:31:32.309 05:48:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:31:32.309 05:48:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:37.575 Initializing NVMe Controllers 00:31:37.575 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.575 Controller IO queue size 128, less than required. 00:31:37.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.575 Controller IO queue size 128, less than required. 00:31:37.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.575 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.575 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:37.575 Initialization complete. Launching workers. 
00:31:37.575 ======================================================== 00:31:37.575 Latency(us) 00:31:37.575 Device Information : IOPS MiB/s Average min max 00:31:37.575 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3303.00 825.75 39551.03 14770.40 410102.60 00:31:37.575 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3495.00 873.75 37635.70 15360.08 405363.87 00:31:37.575 ======================================================== 00:31:37.575 Total : 6798.00 1699.50 38566.32 14770.40 410102.60 00:31:37.575 00:31:37.575 05:48:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:31:37.575 No valid NVMe controllers or AIO or URING devices found 00:31:37.575 Initializing NVMe Controllers 00:31:37.575 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.575 Controller IO queue size 128, less than required. 00:31:37.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:37.575 Controller IO queue size 128, less than required. 00:31:37.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:37.575 WARNING: Some requested NVMe devices were skipped 00:31:37.832 05:48:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:31:43.095 Initializing NVMe Controllers 00:31:43.095 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:43.095 Controller IO queue size 128, less than required. 00:31:43.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:43.095 Controller IO queue size 128, less than required. 00:31:43.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:43.095 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:43.095 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:43.095 Initialization complete. Launching workers. 
00:31:43.095 00:31:43.095 ==================== 00:31:43.096 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:43.096 RDMA transport: 00:31:43.096 dev name: mlx5_0 00:31:43.096 polls: 319756 00:31:43.096 idle_polls: 317437 00:31:43.096 completions: 36254 00:31:43.096 queued_requests: 1 00:31:43.096 total_send_wrs: 18127 00:31:43.096 send_doorbell_updates: 2115 00:31:43.096 total_recv_wrs: 18254 00:31:43.096 recv_doorbell_updates: 2117 00:31:43.096 --------------------------------- 00:31:43.096 00:31:43.096 ==================== 00:31:43.096 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:43.096 RDMA transport: 00:31:43.096 dev name: mlx5_0 00:31:43.096 polls: 315454 00:31:43.096 idle_polls: 315215 00:31:43.096 completions: 17118 00:31:43.096 queued_requests: 1 00:31:43.096 total_send_wrs: 8559 00:31:43.096 send_doorbell_updates: 233 00:31:43.096 total_recv_wrs: 8686 00:31:43.096 recv_doorbell_updates: 234 00:31:43.096 --------------------------------- 00:31:43.096 ======================================================== 00:31:43.096 Latency(us) 00:31:43.096 Device Information : IOPS MiB/s Average min max 00:31:43.096 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4531.50 1132.87 28604.89 13840.85 392775.15 00:31:43.096 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2139.50 534.87 61135.69 32088.83 406045.69 00:31:43.096 ======================================================== 00:31:43.096 Total : 6671.00 1667.75 39038.05 13840.85 406045.69 00:31:43.096 00:31:43.096 05:48:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:43.096 05:48:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.096 05:48:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 
00:31:43.096 05:48:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:31:43.096 05:48:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:49.676 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=396bb5e9-1689-4b48-965f-fdb8406cf473 00:31:49.676 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 396bb5e9-1689-4b48-965f-fdb8406cf473 00:31:49.676 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=396bb5e9-1689-4b48-965f-fdb8406cf473 00:31:49.676 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:49.676 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:49.676 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:49.676 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:49.952 { 00:31:49.952 "uuid": "396bb5e9-1689-4b48-965f-fdb8406cf473", 00:31:49.952 "name": "lvs_0", 00:31:49.952 "base_bdev": "Nvme0n1", 00:31:49.952 "total_data_clusters": 476466, 00:31:49.952 "free_clusters": 476466, 00:31:49.952 "block_size": 512, 00:31:49.952 "cluster_size": 4194304 00:31:49.952 } 00:31:49.952 ]' 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="396bb5e9-1689-4b48-965f-fdb8406cf473") .free_clusters' 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=476466 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="396bb5e9-1689-4b48-965f-fdb8406cf473") .cluster_size' 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1905864 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1905864 00:31:49.952 1905864 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:49.952 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 396bb5e9-1689-4b48-965f-fdb8406cf473 lbd_0 20480 00:31:50.235 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=49d1caac-90f3-4ee4-892e-7f0c826614b3 00:31:50.235 05:48:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 49d1caac-90f3-4ee4-892e-7f0c826614b3 lvs_n_0 00:31:51.645 05:48:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=784a9780-79cc-4e56-9063-3eefd409c81b 00:31:51.645 05:48:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 784a9780-79cc-4e56-9063-3eefd409c81b 00:31:51.645 05:48:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=784a9780-79cc-4e56-9063-3eefd409c81b 00:31:51.645 05:48:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:51.645 05:48:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:51.646 05:48:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:51.646 05:48:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:51.646 { 00:31:51.646 "uuid": "396bb5e9-1689-4b48-965f-fdb8406cf473", 00:31:51.646 "name": "lvs_0", 00:31:51.646 "base_bdev": "Nvme0n1", 00:31:51.646 "total_data_clusters": 476466, 00:31:51.646 "free_clusters": 471346, 00:31:51.646 "block_size": 512, 00:31:51.646 "cluster_size": 4194304 00:31:51.646 }, 00:31:51.646 { 00:31:51.646 "uuid": "784a9780-79cc-4e56-9063-3eefd409c81b", 00:31:51.646 "name": "lvs_n_0", 00:31:51.646 "base_bdev": "49d1caac-90f3-4ee4-892e-7f0c826614b3", 00:31:51.646 "total_data_clusters": 5114, 00:31:51.646 "free_clusters": 5114, 00:31:51.646 "block_size": 512, 00:31:51.646 "cluster_size": 4194304 00:31:51.646 } 00:31:51.646 ]' 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="784a9780-79cc-4e56-9063-3eefd409c81b") .free_clusters' 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="784a9780-79cc-4e56-9063-3eefd409c81b") .cluster_size' 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:31:51.646 20456 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:51.646 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 784a9780-79cc-4e56-9063-3eefd409c81b lbd_nest_0 20456 00:31:51.903 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a165aa38-0207-47af-9188-71c74105bfcc 
00:31:51.903 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:52.161 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:52.161 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a165aa38-0207-47af-9188-71c74105bfcc 00:31:52.161 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:52.420 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:52.420 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:52.420 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:52.420 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:52.420 05:48:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:04.616 Initializing NVMe Controllers 00:32:04.616 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:04.616 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:04.616 Initialization complete. Launching workers. 
00:32:04.616 ======================================================== 00:32:04.616 Latency(us) 00:32:04.616 Device Information : IOPS MiB/s Average min max 00:32:04.616 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5169.10 2.52 192.96 78.42 8078.92 00:32:04.616 ======================================================== 00:32:04.616 Total : 5169.10 2.52 192.96 78.42 8078.92 00:32:04.616 00:32:04.616 05:49:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:04.616 05:49:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:16.813 Initializing NVMe Controllers 00:32:16.813 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:16.813 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:16.813 Initialization complete. Launching workers. 
00:32:16.813 ======================================================== 00:32:16.813 Latency(us) 00:32:16.814 Device Information : IOPS MiB/s Average min max 00:32:16.814 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2482.96 310.37 401.84 174.05 6099.04 00:32:16.814 ======================================================== 00:32:16.814 Total : 2482.96 310.37 401.84 174.05 6099.04 00:32:16.814 00:32:16.814 05:49:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:16.814 05:49:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:16.814 05:49:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:29.010 Initializing NVMe Controllers 00:32:29.010 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:29.010 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:29.010 Initialization complete. Launching workers. 
00:32:29.010 ========================================================
00:32:29.010 Latency(us)
00:32:29.010 Device Information : IOPS MiB/s Average min max
00:32:29.010 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10254.50 5.01 3119.77 1120.19 8863.38
00:32:29.010 ========================================================
00:32:29.010 Total : 10254.50 5.01 3119.77 1120.19 8863.38
00:32:29.010
00:32:29.011 05:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:29.011 05:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:32:38.976 Initializing NVMe Controllers
00:32:38.976 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:38.976 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:38.976 Initialization complete. Launching workers.
00:32:38.976 ========================================================
00:32:38.976 Latency(us)
00:32:38.976 Device Information : IOPS MiB/s Average min max
00:32:38.976 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3990.40 498.80 8024.42 4876.55 25810.84
00:32:38.976 ========================================================
00:32:38.976 Total : 3990.40 498.80 8024.42 4876.55 25810.84
00:32:38.976
00:32:38.976 05:49:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:32:38.976 05:49:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:38.977 05:49:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:32:51.173 Initializing NVMe Controllers
00:32:51.173 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:51.173 Controller IO queue size 128, less than required.
00:32:51.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:51.173 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:51.173 Initialization complete. Launching workers.
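The perf.sh trace above steps through nested loops over queue depth (perf.sh@97) and IO size (perf.sh@98) before each spdk_nvme_perf invocation (perf.sh@99). A minimal sketch of that sweep, with the parameter lists inferred from the six visible runs (the actual arrays in perf.sh may differ); it echoes each command line rather than executing the real binary:

```shell
# Sketch of the nested qd_depth x io_size sweep driving the runs above.
# qd_depth/io_size values are inferred from the logged invocations;
# we print the command instead of running spdk_nvme_perf.
sweep() {
    local qd_depth=(1 32 128) io_size=(512 131072) qd o
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            echo "spdk_nvme_perf -q $qd -o $o -w randrw -M 50 -t 10"
        done
    done
}
sweep
```

This produces six command lines, matching the six latency tables logged in this section (three queue depths times two IO sizes).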
00:32:51.173 ========================================================
00:32:51.173 Latency(us)
00:32:51.173 Device Information : IOPS MiB/s Average min max
00:32:51.173 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16534.50 8.07 7744.06 2046.32 15703.97
00:32:51.173 ========================================================
00:32:51.173 Total : 16534.50 8.07 7744.06 2046.32 15703.97
00:32:51.173
00:32:51.173 05:49:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:51.173 05:49:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:33:03.370 Initializing NVMe Controllers
00:33:03.370 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:33:03.370 Controller IO queue size 128, less than required.
00:33:03.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:03.370 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:03.370 Initialization complete. Launching workers.
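The MiB/s column in each latency table is simply IOPS scaled by the IO size: MiB/s = IOPS × io_size / 2^20. A quick awk check against the logged numbers (the helper name is ours for illustration, not part of the test harness):

```shell
# Sanity-check a run's reported throughput: MiB/s = IOPS * io_size_bytes / 2^20.
# Helper name is illustrative; the inputs below come from logged runs.
mibps_from_iops() {
    awk -v iops="$1" -v sz="$2" 'BEGIN { printf "%.2f\n", iops * sz / 1048576 }'
}
mibps_from_iops 16534.50 512   # q=128 -o 512 run: matches the reported 8.07 MiB/s
```

The same identity holds for the 128 KiB runs, e.g. 9774.33 IOPS × 131072 B ≈ 1221.79 MiB/s.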
00:33:03.370 ========================================================
00:33:03.370 Latency(us)
00:33:03.370 Device Information : IOPS MiB/s Average min max
00:33:03.370 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9774.33 1221.79 13096.69 3887.62 92924.77
00:33:03.370 ========================================================
00:33:03.370 Total : 9774.33 1221.79 13096.69 3887.62 92924.77
00:33:03.370
00:33:03.370 05:49:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:03.370 05:49:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a165aa38-0207-47af-9188-71c74105bfcc
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 49d1caac-90f3-4ee4-892e-7f0c826614b3
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 --
# set +e 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:03.370 rmmod nvme_rdma 00:33:03.370 rmmod nvme_fabrics 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3509473 ']' 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3509473 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3509473 ']' 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3509473 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3509473 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3509473' 00:33:03.370 killing process with pid 3509473 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3509473 00:33:03.370 05:49:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3509473 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:07.567 00:33:07.567 real 1m58.033s 00:33:07.567 user 7m18.487s 00:33:07.567 sys 0m9.768s 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:07.567 ************************************ 00:33:07.567 END TEST nvmf_perf 00:33:07.567 ************************************ 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.567 ************************************ 00:33:07.567 START TEST nvmf_fio_host 00:33:07.567 ************************************ 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:33:07.567 * Looking for test storage... 
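The nvmf_fio_host prologue traced here includes scripts/common.sh checking the installed lcov version against 2 via `lt 1.15 2`: the version strings are split on dots and compared component by component. An illustrative standalone reimplementation of that comparison (the real cmp_versions also handles other operators and `-`/`:` separators):

```shell
# Illustrative sketch of the dotted-version compare traced above
# (scripts/common.sh lt/cmp_versions); the real helper supports more operators.
lt() {  # lt A B -> success (exit 0) when version A < version B
    local IFS=. i a b
    local -a v1=($1) v2=($2)
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0}
        b=${v2[i]:-0}
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1  # equal versions are not strictly less-than
}
lt 1.15 2 && echo "1.15 < 2"
```

Note the component-wise semantics: `lt 1.9 1.15` succeeds because 9 < 15, which is version ordering rather than decimal ordering.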
00:33:07.567 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.567 --rc genhtml_branch_coverage=1 00:33:07.567 --rc genhtml_function_coverage=1 00:33:07.567 --rc genhtml_legend=1 00:33:07.567 --rc geninfo_all_blocks=1 00:33:07.567 --rc geninfo_unexecuted_blocks=1 00:33:07.567 00:33:07.567 ' 00:33:07.567 05:50:03 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.567 --rc genhtml_branch_coverage=1 00:33:07.567 --rc genhtml_function_coverage=1 00:33:07.567 --rc genhtml_legend=1 00:33:07.567 --rc geninfo_all_blocks=1 00:33:07.567 --rc geninfo_unexecuted_blocks=1 00:33:07.567 00:33:07.567 ' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.567 --rc genhtml_branch_coverage=1 00:33:07.567 --rc genhtml_function_coverage=1 00:33:07.567 --rc genhtml_legend=1 00:33:07.567 --rc geninfo_all_blocks=1 00:33:07.567 --rc geninfo_unexecuted_blocks=1 00:33:07.567 00:33:07.567 ' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.567 --rc genhtml_branch_coverage=1 00:33:07.567 --rc genhtml_function_coverage=1 00:33:07.567 --rc genhtml_legend=1 00:33:07.567 --rc geninfo_all_blocks=1 00:33:07.567 --rc geninfo_unexecuted_blocks=1 00:33:07.567 00:33:07.567 ' 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
paths/export.sh@5 -- # export PATH 00:33:07.567 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:07.568 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.568 05:50:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.680 05:50:11 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:15.680 05:50:11 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:15.680 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.680 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:15.680 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:15.681 05:50:11 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:15.681 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:15.681 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@442 -- # is_hw=yes 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:15.681 05:50:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:15.681 05:50:12 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:15.681 
05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:15.681 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:15.681 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:15.681 altname enp217s0f0np0 00:33:15.681 altname ens818f0np0 00:33:15.681 inet 192.168.100.8/24 scope global mlx_0_0 00:33:15.681 valid_lft forever preferred_lft forever 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:15.681 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:15.681 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 
00:33:15.681 altname enp217s0f1np1 00:33:15.681 altname ens818f1np1 00:33:15.681 inet 192.168.100.9/24 scope global mlx_0_1 00:33:15.681 valid_lft forever preferred_lft forever 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- 
# for net_dev in "${net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:15.681 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:15.682 05:50:12 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:15.682 192.168.100.9' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:15.682 192.168.100.9' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:15.682 192.168.100.9' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3531311 00:33:15.682 05:50:12 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3531311 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3531311 ']' 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.682 05:50:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.941 [2024-11-27 05:50:12.312165] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:33:15.941 [2024-11-27 05:50:12.312277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.941 [2024-11-27 05:50:12.470371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:16.200 [2024-11-27 05:50:12.572504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:16.200 [2024-11-27 05:50:12.572555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.200 [2024-11-27 05:50:12.572568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.200 [2024-11-27 05:50:12.572597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.200 [2024-11-27 05:50:12.572612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.200 [2024-11-27 05:50:12.575175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.200 [2024-11-27 05:50:12.575249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.200 [2024-11-27 05:50:12.575368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.200 [2024-11-27 05:50:12.575375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.768 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.768 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:16.768 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:16.768 [2024-11-27 05:50:13.294647] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f4e1d931940) succeed. 00:33:16.768 [2024-11-27 05:50:13.304664] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f4e1cfbd940) succeed. 
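The `get_ip_address` trace above (common.sh@117 and @485-486) reduces to a small pipeline: take the `ip -o -4 addr show` one-liner for each RDMA interface, keep field 4, strip the prefix length, then split the resulting newline-separated list into first and second target IPs with `head`/`tail`. A minimal sketch of that pipeline, run against a canned sample line rather than a live `mlx_0_0` interface (the interface name and address below are copied from the trace, not probed):

```shell
# Canned one-line output in the shape `ip -o -4 addr show mlx_0_0` emits;
# on the test node this comes from the live interface.
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'

# Field 4 is "addr/prefix"; cut keeps only the address (common.sh@117).
ip_addr=$(echo "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # 192.168.100.8

# RDMA_IP_LIST is newline-separated; common.sh@485-486 pick the first
# and second entries with head / tail.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
```

This is why the trace shows three separate `@117` records per interface: the `ip`, `awk`, and `cut` stages of one pipeline each get their own xtrace line.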
00:33:17.027 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:17.027 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.027 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.285 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:17.285 Malloc1 00:33:17.544 05:50:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:17.544 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:17.802 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:18.061 [2024-11-27 05:50:14.436368] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 
-- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:18.061 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:18.356 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:18.356 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:18.356 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:18.356 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:18.356 05:50:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:18.620 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:18.620 fio-3.35 00:33:18.620 Starting 1 thread 00:33:21.149 00:33:21.149 test: (groupid=0, jobs=1): err= 0: pid=3531943: Wed Nov 27 05:50:17 2024 00:33:21.149 read: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(122MiB/2004msec) 00:33:21.149 slat (nsec): min=1492, max=52609, avg=1693.38, stdev=692.94 00:33:21.149 clat (usec): min=3362, max=7453, avg=4104.39, stdev=119.59 00:33:21.149 lat (usec): min=3367, max=7455, avg=4106.09, stdev=119.61 00:33:21.149 clat percentiles (usec): 00:33:21.149 | 1.00th=[ 3687], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4080], 00:33:21.149 | 30.00th=[ 4080], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4113], 00:33:21.149 | 70.00th=[ 4113], 80.00th=[ 4113], 90.00th=[ 4146], 95.00th=[ 4146], 00:33:21.149 | 99.00th=[ 4490], 99.50th=[ 4490], 99.90th=[ 5407], 99.95th=[ 6915], 00:33:21.149 | 99.99th=[ 7439] 00:33:21.149 bw ( KiB/s): min=60944, max=63104, per=99.98%, avg=62078.00, stdev=990.80, samples=4 00:33:21.149 iops : min=15236, max=15776, avg=15519.50, stdev=247.70, samples=4 00:33:21.149 write: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(122MiB/2004msec); 0 zone resets 00:33:21.149 slat (nsec): min=1536, max=22311, avg=1774.80, stdev=652.79 00:33:21.149 clat (usec): min=3356, max=7439, avg=4102.17, stdev=115.39 00:33:21.149 lat (usec): min=3361, max=7441, avg=4103.94, stdev=115.42 00:33:21.149 clat percentiles (usec): 00:33:21.149 | 1.00th=[ 3687], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 00:33:21.149 | 30.00th=[ 4080], 40.00th=[ 4080], 50.00th=[ 
4113], 60.00th=[ 4113], 00:33:21.149 | 70.00th=[ 4113], 80.00th=[ 4113], 90.00th=[ 4146], 95.00th=[ 4146], 00:33:21.149 | 99.00th=[ 4490], 99.50th=[ 4490], 99.90th=[ 5866], 99.95th=[ 6456], 00:33:21.149 | 99.99th=[ 7373] 00:33:21.149 bw ( KiB/s): min=61256, max=63032, per=100.00%, avg=62106.00, stdev=741.09, samples=4 00:33:21.149 iops : min=15314, max=15758, avg=15526.50, stdev=185.27, samples=4 00:33:21.149 lat (msec) : 4=1.71%, 10=98.29% 00:33:21.149 cpu : usr=99.15%, sys=0.50%, ctx=16, majf=0, minf=1286 00:33:21.149 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:21.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:21.149 issued rwts: total=31107,31112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:21.149 00:33:21.149 Run status group 0 (all jobs): 00:33:21.149 READ: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=122MiB (127MB), run=2004-2004msec 00:33:21.149 WRITE: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=122MiB (127MB), run=2004-2004msec 00:33:21.149 ----------------------------------------------------- 00:33:21.149 Suppressions used: 00:33:21.149 count bytes template 00:33:21.149 1 63 /usr/src/fio/parse.c 00:33:21.149 1 8 libtcmalloc_minimal.so 00:33:21.149 ----------------------------------------------------- 00:33:21.149 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:21.149 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.425 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.425 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:21.425 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:21.425 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:21.425 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:21.425 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:21.425 05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:21.425 
05:50:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:21.688 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:21.688 fio-3.35 00:33:21.688 Starting 1 thread 00:33:24.229 00:33:24.229 test: (groupid=0, jobs=1): err= 0: pid=3532568: Wed Nov 27 05:50:20 2024 00:33:24.229 read: IOPS=12.3k, BW=192MiB/s (202MB/s)(380MiB/1976msec) 00:33:24.229 slat (nsec): min=2494, max=49597, avg=2945.93, stdev=1241.61 00:33:24.229 clat (usec): min=608, max=9119, avg=1960.13, stdev=1626.53 00:33:24.229 lat (usec): min=611, max=9124, avg=1963.07, stdev=1626.98 00:33:24.229 clat percentiles (usec): 00:33:24.229 | 1.00th=[ 791], 5.00th=[ 906], 10.00th=[ 971], 20.00th=[ 1074], 00:33:24.229 | 30.00th=[ 1156], 40.00th=[ 1237], 50.00th=[ 1352], 60.00th=[ 1483], 00:33:24.229 | 70.00th=[ 1631], 80.00th=[ 1844], 90.00th=[ 5604], 95.00th=[ 5866], 00:33:24.229 | 99.00th=[ 7570], 99.50th=[ 8160], 99.90th=[ 8717], 99.95th=[ 8717], 00:33:24.229 | 99.99th=[ 8979] 00:33:24.229 bw ( KiB/s): min=92423, max=98272, per=48.80%, avg=96041.75, stdev=2583.57, samples=4 00:33:24.229 iops : min= 5776, max= 6142, avg=6002.50, stdev=161.68, samples=4 00:33:24.229 write: IOPS=7016, BW=110MiB/s (115MB/s)(195MiB/1781msec); 0 zone resets 00:33:24.229 slat (usec): min=26, max=122, avg=29.12, stdev= 4.15 00:33:24.229 clat (usec): min=4960, max=22970, avg=14831.36, stdev=2206.19 00:33:24.229 lat (usec): min=4986, max=23000, avg=14860.48, stdev=2205.96 00:33:24.229 clat percentiles (usec): 00:33:24.229 | 1.00th=[ 8455], 5.00th=[11731], 10.00th=[12518], 20.00th=[13173], 00:33:24.229 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14746], 60.00th=[15139], 00:33:24.229 | 70.00th=[15795], 80.00th=[16450], 90.00th=[17433], 95.00th=[18482], 00:33:24.229 | 
99.00th=[21103], 99.50th=[21627], 99.90th=[22414], 99.95th=[22676], 00:33:24.229 | 99.99th=[22938] 00:33:24.229 bw ( KiB/s): min=93636, max=102240, per=88.17%, avg=98985.00, stdev=3718.11, samples=4 00:33:24.229 iops : min= 5852, max= 6390, avg=6186.50, stdev=232.50, samples=4 00:33:24.229 lat (usec) : 750=0.26%, 1000=7.84% 00:33:24.229 lat (msec) : 2=46.85%, 4=2.32%, 10=9.35%, 20=32.66%, 50=0.72% 00:33:24.229 cpu : usr=96.26%, sys=2.04%, ctx=186, majf=0, minf=10202 00:33:24.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:24.229 issued rwts: total=24307,12497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:24.229 00:33:24.229 Run status group 0 (all jobs): 00:33:24.229 READ: bw=192MiB/s (202MB/s), 192MiB/s-192MiB/s (202MB/s-202MB/s), io=380MiB (398MB), run=1976-1976msec 00:33:24.229 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=195MiB (205MB), run=1781-1781msec 00:33:24.489 ----------------------------------------------------- 00:33:24.489 Suppressions used: 00:33:24.489 count bytes template 00:33:24.489 1 63 /usr/src/fio/parse.c 00:33:24.489 210 20160 /usr/src/fio/iolog.c 00:33:24.489 1 8 libtcmalloc_minimal.so 00:33:24.489 ----------------------------------------------------- 00:33:24.489 00:33:24.489 05:50:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.489 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:24.489 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:24.489 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:24.489 
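The `get_nvme_bdfs` helper invoked above enumerates local NVMe PCI addresses by piping `scripts/gen_nvme.sh` through `jq -r '.config[].params.traddr'`. A rough sketch of that extraction against a canned JSON fragment (hypothetical sample in the shape `gen_nvme.sh` produces; the real output comes from the node's PCI scan, and the real helper uses `jq`, emulated here with `sed` to keep the sketch dependency-free):

```shell
# Canned gen_nvme.sh-style output for a single controller; the bdf value
# mirrors the 0000:d8:00.0 device seen later in this trace.
canned='{
  "config": [
    { "params": { "traddr": "0000:d8:00.0" } }
  ]
}'

# Pull every "traddr" value, one per line, as the jq filter
# .config[].params.traddr would.
bdfs=$(printf '%s\n' "$canned" | sed -n 's/.*"traddr": *"\([^"]*\)".*/\1/p')
echo "$bdfs"   # 0000:d8:00.0
```

The `(( 1 == 0 ))` guard that follows in the trace simply asserts the resulting list is non-empty before the script attaches an `Nvme0` controller at that bdf.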
05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:24.489 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:24.489 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:24.489 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:24.489 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:24.749 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:24.749 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:33:24.749 05:50:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:33:28.040 Nvme0n1 00:33:28.040 05:50:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:33.320 05:50:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f0ac78af-60aa-4a1e-b334-96ec66677a20 00:33:33.320 05:50:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f0ac78af-60aa-4a1e-b334-96ec66677a20 00:33:33.320 05:50:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=f0ac78af-60aa-4a1e-b334-96ec66677a20 00:33:33.320 05:50:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:33.320 05:50:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:33.320 05:50:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1371 -- # local cs 00:33:33.320 05:50:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:33.580 { 00:33:33.580 "uuid": "f0ac78af-60aa-4a1e-b334-96ec66677a20", 00:33:33.580 "name": "lvs_0", 00:33:33.580 "base_bdev": "Nvme0n1", 00:33:33.580 "total_data_clusters": 1862, 00:33:33.580 "free_clusters": 1862, 00:33:33.580 "block_size": 512, 00:33:33.580 "cluster_size": 1073741824 00:33:33.580 } 00:33:33.580 ]' 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f0ac78af-60aa-4a1e-b334-96ec66677a20") .free_clusters' 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1862 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f0ac78af-60aa-4a1e-b334-96ec66677a20") .cluster_size' 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1906688 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1906688 00:33:33.580 1906688 00:33:33.580 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:33:34.148 9ea93f25-cf09-4228-9c3e-851969af5b16 00:33:34.148 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:34.406 05:50:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 
00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:34.666 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.950 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:34.950 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:34.950 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:34.950 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:34.950 05:50:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:35.217 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:35.217 fio-3.35 00:33:35.217 Starting 1 thread 00:33:37.752 00:33:37.752 test: (groupid=0, jobs=1): err= 0: pid=3535042: Wed Nov 27 05:50:34 2024 00:33:37.752 read: IOPS=8806, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2006msec) 00:33:37.752 slat (nsec): min=1493, max=31815, avg=1683.52, stdev=409.30 00:33:37.752 clat (usec): min=195, max=332905, avg=7201.61, stdev=19715.44 00:33:37.752 lat (usec): min=197, max=332909, avg=7203.30, stdev=19715.49 00:33:37.752 clat percentiles (msec): 00:33:37.752 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:33:37.752 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:33:37.752 | 70.00th=[ 6], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:33:37.752 | 99.00th=[ 7], 99.50th=[ 9], 99.90th=[ 334], 99.95th=[ 334], 00:33:37.752 | 99.99th=[ 334] 00:33:37.752 bw ( KiB/s): 
min=13144, max=42808, per=99.96%, avg=35210.00, stdev=14712.22, samples=4 00:33:37.752 iops : min= 3286, max=10702, avg=8802.50, stdev=3678.05, samples=4 00:33:37.752 write: IOPS=8816, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2006msec); 0 zone resets 00:33:37.752 slat (nsec): min=1523, max=17915, avg=1774.27, stdev=343.22 00:33:37.752 clat (usec): min=173, max=333259, avg=7170.62, stdev=19174.51 00:33:37.752 lat (usec): min=174, max=333265, avg=7172.40, stdev=19174.59 00:33:37.752 clat percentiles (msec): 00:33:37.752 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:33:37.752 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:33:37.752 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:33:37.752 | 99.00th=[ 7], 99.50th=[ 9], 99.90th=[ 334], 99.95th=[ 334], 00:33:37.752 | 99.99th=[ 334] 00:33:37.752 bw ( KiB/s): min=13560, max=42680, per=99.92%, avg=35238.00, stdev=14453.26, samples=4 00:33:37.752 iops : min= 3390, max=10670, avg=8809.50, stdev=3613.31, samples=4 00:33:37.752 lat (usec) : 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01% 00:33:37.752 lat (msec) : 2=0.04%, 4=0.25%, 10=99.30%, 20=0.02%, 500=0.36% 00:33:37.752 cpu : usr=99.20%, sys=0.35%, ctx=15, majf=0, minf=1685 00:33:37.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:37.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:37.752 issued rwts: total=17665,17686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:37.752 00:33:37.752 Run status group 0 (all jobs): 00:33:37.752 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2006-2006msec 00:33:37.752 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.4MB), run=2006-2006msec 00:33:38.011 ----------------------------------------------------- 
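The `get_lvs_free_mb` output above (fc=1862 free clusters, cs=1073741824-byte clusters, free_mb=1906688) follows from a one-line conversion; a minimal bash sketch of that arithmetic, assuming the helper simply scales clusters to MiB:

```shell
# Reconstructed from the values in the log above: free MiB is
# free_clusters * cluster_size, converted from bytes to MiB.
fc=1862                     # free_clusters from bdev_lvol_get_lvstores
cs=1073741824               # cluster_size in bytes (1 GiB)
free_mb=$(( fc * cs / 1024 / 1024 ))
echo "$free_mb"             # 1906688, matching the lbd_0 size created above
```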
00:33:38.011 Suppressions used: 00:33:38.011 count bytes template 00:33:38.011 1 64 /usr/src/fio/parse.c 00:33:38.011 1 8 libtcmalloc_minimal.so 00:33:38.011 ----------------------------------------------------- 00:33:38.011 00:33:38.011 05:50:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:38.270 05:50:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:39.649 05:50:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8d9cde12-76ac-49e4-bbf7-03fac427f9f4 00:33:39.649 05:50:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8d9cde12-76ac-49e4-bbf7-03fac427f9f4 00:33:39.649 05:50:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=8d9cde12-76ac-49e4-bbf7-03fac427f9f4 00:33:39.649 05:50:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:39.649 05:50:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:39.649 05:50:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:39.649 05:50:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:39.649 { 00:33:39.649 "uuid": "f0ac78af-60aa-4a1e-b334-96ec66677a20", 00:33:39.649 "name": "lvs_0", 00:33:39.649 "base_bdev": "Nvme0n1", 00:33:39.649 "total_data_clusters": 1862, 00:33:39.649 "free_clusters": 0, 00:33:39.649 "block_size": 512, 00:33:39.649 "cluster_size": 1073741824 00:33:39.649 }, 00:33:39.649 { 00:33:39.649 "uuid": 
"8d9cde12-76ac-49e4-bbf7-03fac427f9f4", 00:33:39.649 "name": "lvs_n_0", 00:33:39.649 "base_bdev": "9ea93f25-cf09-4228-9c3e-851969af5b16", 00:33:39.649 "total_data_clusters": 476206, 00:33:39.649 "free_clusters": 476206, 00:33:39.649 "block_size": 512, 00:33:39.649 "cluster_size": 4194304 00:33:39.649 } 00:33:39.649 ]' 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8d9cde12-76ac-49e4-bbf7-03fac427f9f4") .free_clusters' 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=476206 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8d9cde12-76ac-49e4-bbf7-03fac427f9f4") .cluster_size' 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1904824 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1904824 00:33:39.649 1904824 00:33:39.649 05:50:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:33:42.189 cd9d0a97-ba78-4f77-9f0d-313db5b5adb8 00:33:42.189 05:50:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:42.449 05:50:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:42.449 05:50:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 
-s 4420 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
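The `fio_plugin` helper above locates the ASan runtime by scanning `ldd` output for each sanitizer name and prepending the hit to `LD_PRELOAD`. A self-contained sketch of that extraction step, using a canned `ldd` line in place of the real plugin binary:

```shell
# Simulated `ldd` output line; the real helper pipes `ldd $plugin`
# through the same grep/awk to resolve the sanitizer library path.
ldd_out='	libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)'
asan_lib=$(printf '%s\n' "$ldd_out" | grep libasan | awk '{print $3}')
echo "$asan_lib"            # /usr/lib64/libasan.so.8
# The helper then runs: LD_PRELOAD="$asan_lib $plugin" fio <job file>
```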
00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:42.708 05:50:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:43.274 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:43.274 fio-3.35 00:33:43.274 Starting 1 thread 00:33:45.860 00:33:45.860 test: (groupid=0, jobs=1): err= 0: pid=3536439: Wed Nov 27 05:50:42 2024 00:33:45.860 read: IOPS=8937, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2006msec) 00:33:45.860 slat (nsec): min=1492, max=24693, avg=1655.95, stdev=315.03 00:33:45.860 clat (usec): min=3955, max=12512, avg=7066.93, stdev=292.62 00:33:45.860 lat (usec): min=3959, max=12514, avg=7068.59, stdev=292.58 00:33:45.860 clat percentiles (usec): 00:33:45.860 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 6980], 20.00th=[ 6980], 00:33:45.860 | 30.00th=[ 7046], 40.00th=[ 7046], 50.00th=[ 7046], 60.00th=[ 7046], 00:33:45.860 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7111], 95.00th=[ 7242], 00:33:45.860 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[11469], 99.95th=[12387], 00:33:45.860 | 99.99th=[12518] 00:33:45.860 bw ( KiB/s): min=34024, max=36736, per=99.95%, avg=35732.00, stdev=1194.02, samples=4 00:33:45.860 iops : min= 8506, max= 9184, avg=8933.00, stdev=298.51, samples=4 00:33:45.860 write: IOPS=8954, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2006msec); 0 zone resets 00:33:45.860 slat (nsec): min=1529, max=17829, avg=1750.19, 
stdev=352.74 00:33:45.860 clat (usec): min=3965, max=12526, avg=7089.44, stdev=271.06 00:33:45.860 lat (usec): min=3971, max=12528, avg=7091.19, stdev=271.04 00:33:45.860 clat percentiles (usec): 00:33:45.860 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 6980], 20.00th=[ 7046], 00:33:45.860 | 30.00th=[ 7046], 40.00th=[ 7046], 50.00th=[ 7046], 60.00th=[ 7111], 00:33:45.860 | 70.00th=[ 7111], 80.00th=[ 7111], 90.00th=[ 7177], 95.00th=[ 7308], 00:33:45.860 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[10683], 99.95th=[11469], 00:33:45.860 | 99.99th=[12518] 00:33:45.860 bw ( KiB/s): min=34848, max=36392, per=99.94%, avg=35796.00, stdev=662.49, samples=4 00:33:45.860 iops : min= 8712, max= 9098, avg=8949.00, stdev=165.62, samples=4 00:33:45.860 lat (msec) : 4=0.02%, 10=99.86%, 20=0.12% 00:33:45.860 cpu : usr=99.45%, sys=0.15%, ctx=14, majf=0, minf=1754 00:33:45.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:45.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:45.860 issued rwts: total=17929,17962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:45.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:45.860 00:33:45.860 Run status group 0 (all jobs): 00:33:45.860 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2006-2006msec 00:33:45.860 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2006-2006msec 00:33:45.860 ----------------------------------------------------- 00:33:45.860 Suppressions used: 00:33:45.860 count bytes template 00:33:45.860 1 64 /usr/src/fio/parse.c 00:33:45.860 1 8 libtcmalloc_minimal.so 00:33:45.860 ----------------------------------------------------- 00:33:45.860 00:33:45.860 05:50:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:46.148 05:50:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:46.148 05:50:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:56.210 05:50:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:56.210 05:50:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:01.487 05:50:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:01.487 05:50:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.022 05:51:00 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:04.022 rmmod nvme_rdma 00:34:04.022 rmmod nvme_fabrics 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3531311 ']' 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3531311 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3531311 ']' 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3531311 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.022 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3531311 00:34:04.281 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.281 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.281 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3531311' 00:34:04.281 killing process with pid 3531311 00:34:04.281 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3531311 00:34:04.281 05:51:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3531311 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:06.190 00:34:06.190 real 0m58.900s 00:34:06.190 user 4m3.512s 00:34:06.190 sys 0m13.119s 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.190 ************************************ 00:34:06.190 END TEST nvmf_fio_host 00:34:06.190 ************************************ 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.190 ************************************ 00:34:06.190 START TEST nvmf_failover 00:34:06.190 ************************************ 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:34:06.190 * Looking for test storage... 
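The `killprocess` teardown above verifies the target pid before killing it and then reaps it with `wait`. A stripped-down sketch of that kill-and-wait pattern (the process-name check against `ps --no-headers -o comm=` is omitted here):

```shell
# Spawn a stand-in for the nvmf target process, kill it, confirm it is gone.
sleep 300 &
pid=$!
kill -0 "$pid"                      # pid must be alive before we kill it
kill "$pid"
wait "$pid" 2>/dev/null || true     # reap; exit status reflects the signal
kill -0 "$pid" 2>/dev/null && echo alive || echo gone
```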
00:34:06.190 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.190 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.191 --rc genhtml_branch_coverage=1 00:34:06.191 --rc genhtml_function_coverage=1 00:34:06.191 --rc genhtml_legend=1 00:34:06.191 --rc geninfo_all_blocks=1 00:34:06.191 --rc geninfo_unexecuted_blocks=1 00:34:06.191 00:34:06.191 ' 00:34:06.191 05:51:02 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.191 --rc genhtml_branch_coverage=1 00:34:06.191 --rc genhtml_function_coverage=1 00:34:06.191 --rc genhtml_legend=1 00:34:06.191 --rc geninfo_all_blocks=1 00:34:06.191 --rc geninfo_unexecuted_blocks=1 00:34:06.191 00:34:06.191 ' 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.191 --rc genhtml_branch_coverage=1 00:34:06.191 --rc genhtml_function_coverage=1 00:34:06.191 --rc genhtml_legend=1 00:34:06.191 --rc geninfo_all_blocks=1 00:34:06.191 --rc geninfo_unexecuted_blocks=1 00:34:06.191 00:34:06.191 ' 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.191 --rc genhtml_branch_coverage=1 00:34:06.191 --rc genhtml_function_coverage=1 00:34:06.191 --rc genhtml_legend=1 00:34:06.191 --rc geninfo_all_blocks=1 00:34:06.191 --rc geninfo_unexecuted_blocks=1 00:34:06.191 00:34:06.191 ' 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
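The `cmp_versions` walk above splits the lcov version on `.-:` and compares it component-wise to decide `lt 1.15 2`. A condensed bash sketch of that comparison (simplified: missing components default to 0):

```shell
# Split dotted/dashed version strings and compare numerically, left to right.
lt() {
  local IFS='.-:'
  read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1                          # equal versions are not less-than
}
lt 1.15 2 && echo "lcov 1.15 < 2"  # true, so the legacy --rc lcov_* opts apply
```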
00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.191 05:51:02 
nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.191 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.192 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.192 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:06.451 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.451 05:51:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # pci_devs=() 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:14.571 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:14.572 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:14.572 05:51:10 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:14.572 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:14.572 05:51:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 
-- # [[ rdma == tcp ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:14.572 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:14.572 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- 
# rdma_device_init 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:14.572 05:51:11 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:14.572 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:14.572 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:14.572 altname enp217s0f0np0 00:34:14.572 altname ens818f0np0 00:34:14.572 inet 192.168.100.8/24 scope global mlx_0_0 00:34:14.572 valid_lft forever preferred_lft forever 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:14.572 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:14.572 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:14.572 altname enp217s0f1np1 00:34:14.572 altname ens818f1np1 00:34:14.572 inet 192.168.100.9/24 scope global mlx_0_1 00:34:14.572 valid_lft forever preferred_lft forever 00:34:14.572 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.573 05:51:11 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:14.573 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:14.832 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:14.833 192.168.100.9' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:14.833 192.168.100.9' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:34:14.833 05:51:11 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:14.833 192.168.100.9' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3544717 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3544717 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3544717 ']' 
00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.833 05:51:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:14.833 [2024-11-27 05:51:11.346193] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:34:14.833 [2024-11-27 05:51:11.346288] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.092 [2024-11-27 05:51:11.497929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:15.092 [2024-11-27 05:51:11.599445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.092 [2024-11-27 05:51:11.599493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.092 [2024-11-27 05:51:11.599507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.092 [2024-11-27 05:51:11.599521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.092 [2024-11-27 05:51:11.599531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:15.092 [2024-11-27 05:51:11.602140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.092 [2024-11-27 05:51:11.602201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.092 [2024-11-27 05:51:11.602209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.661 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.661 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:15.661 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:15.661 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:15.661 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:15.661 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.661 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:15.920 [2024-11-27 05:51:12.408149] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f84c6b76940) succeed. 00:34:15.920 [2024-11-27 05:51:12.417712] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f84c6b32940) succeed. 
00:34:16.180 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:16.439 Malloc0 00:34:16.439 05:51:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.698 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.957 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:16.957 [2024-11-27 05:51:13.477416] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:16.957 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:17.216 [2024-11-27 05:51:13.681797] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:17.216 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:17.475 [2024-11-27 05:51:13.890561] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:34:17.475 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:17.475 05:51:13 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3545243 00:34:17.475 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:17.475 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3545243 /var/tmp/bdevperf.sock 00:34:17.475 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3545243 ']' 00:34:17.475 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:17.475 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.476 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:17.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:17.476 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.476 05:51:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:18.411 05:51:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.411 05:51:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:18.411 05:51:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:18.671 NVMe0n1 00:34:18.671 05:51:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:18.930 00:34:18.930 05:51:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3545429 00:34:18.930 05:51:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:18.930 05:51:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:19.866 05:51:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:20.125 05:51:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:23.416 05:51:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 
-s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:23.416 00:34:23.416 05:51:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:23.674 05:51:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:26.964 05:51:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:26.964 [2024-11-27 05:51:23.211418] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:26.964 05:51:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:27.902 05:51:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:27.902 05:51:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3545429 00:34:34.474 { 00:34:34.474 "results": [ 00:34:34.474 { 00:34:34.474 "job": "NVMe0n1", 00:34:34.474 "core_mask": "0x1", 00:34:34.474 "workload": "verify", 00:34:34.474 "status": "finished", 00:34:34.474 "verify_range": { 00:34:34.474 "start": 0, 00:34:34.474 "length": 16384 00:34:34.474 }, 00:34:34.474 "queue_depth": 128, 00:34:34.474 "io_size": 4096, 00:34:34.474 "runtime": 15.006656, 00:34:34.474 "iops": 12364.646727425483, 00:34:34.474 "mibps": 48.29940127900579, 00:34:34.474 "io_failed": 4579, 00:34:34.474 "io_timeout": 0, 00:34:34.474 "avg_latency_us": 10071.92742142628, 00:34:34.474 "min_latency_us": 494.7968, 00:34:34.474 "max_latency_us": 1053609.1648 00:34:34.474 } 00:34:34.474 ], 00:34:34.474 "core_count": 1 00:34:34.474 } 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover 
-- host/failover.sh@61 -- # killprocess 3545243 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3545243 ']' 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3545243 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3545243 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3545243' 00:34:34.474 killing process with pid 3545243 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3545243 00:34:34.474 05:51:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3545243 00:34:35.053 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:35.053 [2024-11-27 05:51:13.986959] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:34:35.053 [2024-11-27 05:51:13.987059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545243 ] 00:34:35.053 [2024-11-27 05:51:14.142338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.053 [2024-11-27 05:51:14.246544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.053 Running I/O for 15 seconds... 00:34:35.053 15232.00 IOPS, 59.50 MiB/s [2024-11-27T04:51:31.640Z] 8256.00 IOPS, 32.25 MiB/s [2024-11-27T04:51:31.640Z] [2024-11-27 05:51:17.534144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.053 [2024-11-27 05:51:17.534204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32751 cdw0:0 sqhd:81e0 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.534228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.053 [2024-11-27 05:51:17.534248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32751 cdw0:0 sqhd:81e0 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.534268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.053 [2024-11-27 05:51:17.534290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32751 cdw0:0 sqhd:81e0 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.534308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.053 [2024-11-27 05:51:17.534329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:32751 cdw0:0 sqhd:81e0 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.536590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:34:35.053 [2024-11-27 05:51:17.536635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:34:35.053 [2024-11-27 05:51:17.536661] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:34:35.053 [2024-11-27 05:51:17.536683] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:34:35.053 [2024-11-27 05:51:17.536722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.536748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.536842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.536867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.536921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.536946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.536999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.537867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 
[2024-11-27 05:51:17.537947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.537972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.538025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.538049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.538100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.538128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.538179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.538204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.053 [2024-11-27 05:51:17.538256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.053 [2024-11-27 05:51:17.538280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538356] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.538950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.538976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.539964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.539989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 
05:51:17.540143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.540968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.540996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:35.054 [2024-11-27 05:51:17.541046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.541073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.541123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.541149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.541201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.054 [2024-11-27 05:51:17.541226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.054 [2024-11-27 05:51:17.541276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1976 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.541926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.541978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542367] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.055 [2024-11-27 05:51:17.542542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.542624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.542708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 
05:51:17.542786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.542868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.542922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.542947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.543952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.543977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.544028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1168 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000043d9000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.544053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.544106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.544134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.544185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.544211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.055 [2024-11-27 05:51:17.544262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x182c00 00:34:35.055 [2024-11-27 05:51:17.544290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.544930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.544983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545295] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 
len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.545924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.545949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.546647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182c00 00:34:35.056 [2024-11-27 05:51:17.546675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.574946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:35.056 [2024-11-27 05:51:17.574973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:35.056 [2024-11-27 05:51:17.574993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:8 PRP1 0x0 PRP2 0x0 00:34:35.056 [2024-11-27 05:51:17.575015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:17.575224] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:34:35.056 [2024-11-27 05:51:17.575295] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
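Every completion notice in the run above carries the same status, ABORTED - SQ DELETION (00/08): status code type 0x00 (generic) with status code 0x08, which the NVMe spec defines as "Command Aborted due to SQ Deletion" — queued I/O being flushed as the submission queue is torn down ahead of the failover/reset that follows. A minimal sketch of decoding that (sct/sc) pair and pulling the opcode/lba fields out of the `nvme_io_qpair_print_command` records (the regex and helper names here are illustrative, not part of SPDK):

```python
import re
from collections import Counter

# Matches the command lines SPDK's nvme_io_qpair_print_command emits,
# e.g. "*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1048 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def decode_status(sct: int, sc: int) -> str:
    """Map the "(SCT/SC)" pair printed in the log to a readable name.

    SCT 0x00 is the generic status code type; SC 0x08 under it is
    "Command Aborted due to SQ Deletion" per the NVMe base spec,
    matching the "ABORTED - SQ DELETION (00/08)" text above.
    """
    generic = {
        0x00: "SUCCESS",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",
    }
    if sct == 0x00:
        return generic.get(sc, f"GENERIC sc=0x{sc:02x}")
    return f"sct=0x{sct:02x} sc=0x{sc:02x}"

def summarize(log_text: str) -> Counter:
    """Count aborted commands per opcode in a log excerpt."""
    return Counter(m.group("op") for m in CMD_RE.finditer(log_text))

sample = (
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ "
    "sqid:1 cid:33 nsid:1 lba:1048 len:8 SGL KEYED DATA BLOCK "
    "ADDRESS 0x2000043f7000 len:0x1000 key:0x182c00"
)
m = CMD_RE.search(sample)
print(m.group("op"), m.group("lba"))   # READ 1048
print(decode_status(0x00, 0x08))       # ABORTED - SQ DELETION
```

Run over the excerpt above, such a parser would show the aborts are exclusively READs in the first burst and a READ/WRITE mix after the reset — consistent with the randrw workload resuming once "Resetting controller successful" is logged.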
00:34:35.056 [2024-11-27 05:51:17.578340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:35.056 [2024-11-27 05:51:17.617460] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:34:35.056 9808.00 IOPS, 38.31 MiB/s [2024-11-27T04:51:31.643Z] 11298.00 IOPS, 44.13 MiB/s [2024-11-27T04:51:31.643Z] 10750.40 IOPS, 41.99 MiB/s [2024-11-27T04:51:31.643Z] [2024-11-27 05:51:21.011152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.056 [2024-11-27 05:51:21.011217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:21.011257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.056 [2024-11-27 05:51:21.011278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:21.011301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.056 [2024-11-27 05:51:21.011322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.056 [2024-11-27 05:51:21.011344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.011369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.011414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.011459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.011501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.011861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 
05:51:21.011882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.011904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.011947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.011968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.011989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012120] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.057 [2024-11-27 05:51:21.012512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.012557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.012599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:35.057 [2024-11-27 05:51:21.012625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.012647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.012692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.057 [2024-11-27 05:51:21.012714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182400 00:34:35.057 [2024-11-27 05:51:21.012735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.012757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.012783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.012804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.012826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.012847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:48744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.012868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.012890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.012912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.012933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.012953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.012974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.012997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 
05:51:21.013344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.013560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:48752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 
key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.013976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.013997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.014040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.014083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.014126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.014172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.014215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182400 00:34:35.058 [2024-11-27 05:51:21.014261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.014304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.014348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.058 [2024-11-27 05:51:21.014392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.058 [2024-11-27 05:51:21.014414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.014435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.014480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.014527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.014570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.014615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.014965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.014986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49520 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 
05:51:21.015243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.059 [2024-11-27 05:51:21.015335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 
len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.015971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.015991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.016012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.016030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.016051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182400 00:34:35.059 [2024-11-27 05:51:21.016070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.059 [2024-11-27 05:51:21.016091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49136 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x182400 00:34:35.060 [2024-11-27 05:51:21.016390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.060 [2024-11-27 05:51:21.016429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.060 [2024-11-27 05:51:21.016468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.060 [2024-11-27 05:51:21.016506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.060 [2024-11-27 05:51:21.016545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 05:51:21.016566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.060 [2024-11-27 05:51:21.016585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.060 [2024-11-27 
05:51:21.016606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:35.060 [2024-11-27 05:51:21.016640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:35.060 [2024-11-27 05:51:21.016661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:35.060 [2024-11-27 05:51:21.016679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:35.060 [2024-11-27 05:51:21.016700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:35.060 [2024-11-27 05:51:21.016717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:35.060 [2024-11-27 05:51:21.018805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:35.060 [2024-11-27 05:51:21.018833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:35.060 [2024-11-27 05:51:21.018853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49144 len:8 PRP1 0x0 PRP2 0x0
00:34:35.060 [2024-11-27 05:51:21.018873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:35.060 [2024-11-27 05:51:21.019058] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:34:35.060 [2024-11-27 05:51:21.019085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:34:35.060 [2024-11-27 05:51:21.022437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:34:35.060 [2024-11-27 05:51:21.050706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:34:35.060 [2024-11-27 05:51:21.094839] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:34:35.060 9894.17 IOPS, 38.65 MiB/s [2024-11-27T04:51:31.647Z]
10748.57 IOPS, 41.99 MiB/s [2024-11-27T04:51:31.647Z]
11392.88 IOPS, 44.50 MiB/s [2024-11-27T04:51:31.647Z]
11805.11 IOPS, 46.11 MiB/s [2024-11-27T04:51:31.647Z]
[2024-11-27 05:51:25.423821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182c00
00:34:35.060 [2024-11-27 05:51:25.423873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs trimmed for the remaining queued I/O at 05:51:25.423904-426685: READ lba 87968-88480 (len:8, SGL KEYED DATA BLOCK, key:0x182c00) and WRITE lba 88568-88880 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:34:35.063 [2024-11-27 05:51:25.426699] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.426711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.426737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.426763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.426788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.426814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.426839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.426867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.426893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.426918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.426945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.426971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.426985] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.426999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.427025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.427051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.427076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182c00 00:34:35.063 [2024-11-27 05:51:25.427101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88936 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.427126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.427151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.427177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.427202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.427215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:35.063 [2024-11-27 05:51:25.427227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.429237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:35.063 [2024-11-27 05:51:25.429258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:35.063 [2024-11-27 05:51:25.429271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:88976 len:8 PRP1 0x0 PRP2 0x0 00:34:35.063 [2024-11-27 05:51:25.429286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.063 [2024-11-27 05:51:25.429469] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:34:35.064 [2024-11-27 05:51:25.429486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:34:35.064 [2024-11-27 05:51:25.432573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:35.064 [2024-11-27 05:51:25.463326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:34:35.064 10624.60 IOPS, 41.50 MiB/s [2024-11-27T04:51:31.651Z] [2024-11-27 05:51:25.507463] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:34:35.064 11049.27 IOPS, 43.16 MiB/s [2024-11-27T04:51:31.651Z] 11464.50 IOPS, 44.78 MiB/s [2024-11-27T04:51:31.651Z] 11813.00 IOPS, 46.14 MiB/s [2024-11-27T04:51:31.651Z] 12112.07 IOPS, 47.31 MiB/s [2024-11-27T04:51:31.651Z] 12367.33 IOPS, 48.31 MiB/s 00:34:35.064 Latency(us) 00:34:35.064 [2024-11-27T04:51:31.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.064 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:35.064 Verification LBA range: start 0x0 length 0x4000 00:34:35.064 NVMe0n1 : 15.01 12364.65 48.30 305.13 0.00 10071.93 494.80 1053609.16 00:34:35.064 [2024-11-27T04:51:31.651Z] =================================================================================================================== 00:34:35.064 [2024-11-27T04:51:31.651Z] Total : 12364.65 48.30 305.13 0.00 10071.93 494.80 1053609.16 00:34:35.064 Received shutdown signal, test time was about 15.000000 seconds 00:34:35.064 00:34:35.064 Latency(us) 00:34:35.064 [2024-11-27T04:51:31.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.064 [2024-11-27T04:51:31.651Z] =================================================================================================================== 00:34:35.064 [2024-11-27T04:51:31.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3548076 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:35.064 
05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3548076 /var/tmp/bdevperf.sock 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3548076 ']' 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:35.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.064 05:51:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:36.002 05:51:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.002 05:51:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:36.002 05:51:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:36.261 [2024-11-27 05:51:32.641122] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:36.261 05:51:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:36.261 [2024-11-27 05:51:32.829784] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:34:36.521 05:51:32 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:36.780 NVMe0n1 00:34:36.780 05:51:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:37.040 00:34:37.040 05:51:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:37.299 00:34:37.299 05:51:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:37.299 05:51:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:37.299 05:51:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:37.558 05:51:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:40.847 05:51:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:40.847 05:51:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:40.847 05:51:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3549014 00:34:40.847 05:51:37 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:40.847 05:51:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3549014 00:34:42.227 { 00:34:42.227 "results": [ 00:34:42.227 { 00:34:42.227 "job": "NVMe0n1", 00:34:42.227 "core_mask": "0x1", 00:34:42.227 "workload": "verify", 00:34:42.227 "status": "finished", 00:34:42.227 "verify_range": { 00:34:42.227 "start": 0, 00:34:42.227 "length": 16384 00:34:42.227 }, 00:34:42.227 "queue_depth": 128, 00:34:42.227 "io_size": 4096, 00:34:42.227 "runtime": 1.010694, 00:34:42.227 "iops": 15704.060774081967, 00:34:42.227 "mibps": 61.34398739875768, 00:34:42.227 "io_failed": 0, 00:34:42.227 "io_timeout": 0, 00:34:42.227 "avg_latency_us": 8104.777909677418, 00:34:42.227 "min_latency_us": 3198.1568, 00:34:42.227 "max_latency_us": 12006.1952 00:34:42.227 } 00:34:42.227 ], 00:34:42.227 "core_count": 1 00:34:42.227 } 00:34:42.227 05:51:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:42.227 [2024-11-27 05:51:31.669387] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:34:42.227 [2024-11-27 05:51:31.669489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548076 ] 00:34:42.227 [2024-11-27 05:51:31.824327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.227 [2024-11-27 05:51:31.930545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.227 [2024-11-27 05:51:34.017554] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:34:42.227 [2024-11-27 05:51:34.018180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:34:42.227 [2024-11-27 05:51:34.018246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:42.227 [2024-11-27 05:51:34.047479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:34:42.227 [2024-11-27 05:51:34.071119] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:42.227 Running I/O for 1 seconds... 
00:34:42.227 15683.00 IOPS, 61.26 MiB/s 00:34:42.227 Latency(us) 00:34:42.227 [2024-11-27T04:51:38.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.227 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:42.227 Verification LBA range: start 0x0 length 0x4000 00:34:42.227 NVMe0n1 : 1.01 15704.06 61.34 0.00 0.00 8104.78 3198.16 12006.20 00:34:42.227 [2024-11-27T04:51:38.815Z] =================================================================================================================== 00:34:42.228 [2024-11-27T04:51:38.815Z] Total : 15704.06 61.34 0.00 0.00 8104.78 3198.16 12006.20 00:34:42.228 05:51:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:42.228 05:51:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:42.228 05:51:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:42.228 05:51:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:42.228 05:51:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:42.487 05:51:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:42.745 05:51:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3548076 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3548076 ']' 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3548076 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3548076 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3548076' 00:34:46.033 killing process with pid 3548076 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3548076 00:34:46.033 05:51:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3548076 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm 
-f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.968 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:47.227 rmmod nvme_rdma 00:34:47.227 rmmod nvme_fabrics 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3544717 ']' 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3544717 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3544717 ']' 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3544717 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3544717 00:34:47.227 
05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3544717' 00:34:47.227 killing process with pid 3544717 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3544717 00:34:47.227 05:51:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3544717 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:49.131 00:34:49.131 real 0m42.779s 00:34:49.131 user 2m15.779s 00:34:49.131 sys 0m9.469s 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:49.131 ************************************ 00:34:49.131 END TEST nvmf_failover 00:34:49.131 ************************************ 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.131 ************************************ 00:34:49.131 START TEST nvmf_host_discovery 00:34:49.131 ************************************ 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:49.131 * Looking for test storage... 00:34:49.131 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.131 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:49.132 05:51:45 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:49.132 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.132 --rc genhtml_branch_coverage=1 00:34:49.132 --rc genhtml_function_coverage=1 00:34:49.132 --rc genhtml_legend=1 00:34:49.132 --rc geninfo_all_blocks=1 00:34:49.132 --rc geninfo_unexecuted_blocks=1 00:34:49.132 00:34:49.132 ' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.132 --rc genhtml_branch_coverage=1 00:34:49.132 --rc genhtml_function_coverage=1 00:34:49.132 --rc genhtml_legend=1 00:34:49.132 --rc geninfo_all_blocks=1 00:34:49.132 --rc geninfo_unexecuted_blocks=1 00:34:49.132 00:34:49.132 ' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.132 --rc genhtml_branch_coverage=1 00:34:49.132 --rc genhtml_function_coverage=1 00:34:49.132 --rc genhtml_legend=1 00:34:49.132 --rc geninfo_all_blocks=1 00:34:49.132 --rc geninfo_unexecuted_blocks=1 00:34:49.132 00:34:49.132 ' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.132 --rc genhtml_branch_coverage=1 00:34:49.132 --rc genhtml_function_coverage=1 00:34:49.132 --rc genhtml_legend=1 00:34:49.132 --rc geninfo_all_blocks=1 00:34:49.132 --rc geninfo_unexecuted_blocks=1 00:34:49.132 00:34:49.132 ' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s 
extglob 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.132 05:51:45 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:49.132 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:34:49.132 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:34:49.132 00:34:49.132 real 0m0.177s 00:34:49.132 user 0m0.099s 00:34:49.132 sys 0m0.087s 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.132 ************************************ 00:34:49.132 END TEST nvmf_host_discovery 00:34:49.132 ************************************ 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.132 05:51:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.132 ************************************ 00:34:49.132 START TEST nvmf_host_multipath_status 00:34:49.132 ************************************ 00:34:49.133 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:49.392 * Looking for test storage... 
00:34:49.392 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:49.392 05:51:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.392 05:51:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:49.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.392 --rc genhtml_branch_coverage=1 00:34:49.392 --rc genhtml_function_coverage=1 00:34:49.392 --rc genhtml_legend=1 00:34:49.392 --rc geninfo_all_blocks=1 00:34:49.392 --rc geninfo_unexecuted_blocks=1 00:34:49.392 00:34:49.392 ' 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:49.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.392 --rc genhtml_branch_coverage=1 00:34:49.392 --rc genhtml_function_coverage=1 00:34:49.392 --rc genhtml_legend=1 00:34:49.392 --rc geninfo_all_blocks=1 00:34:49.392 --rc geninfo_unexecuted_blocks=1 00:34:49.392 00:34:49.392 ' 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:49.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.392 --rc genhtml_branch_coverage=1 00:34:49.392 --rc genhtml_function_coverage=1 00:34:49.392 --rc genhtml_legend=1 00:34:49.392 --rc geninfo_all_blocks=1 00:34:49.392 --rc geninfo_unexecuted_blocks=1 00:34:49.392 00:34:49.392 ' 00:34:49.392 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:49.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.392 --rc genhtml_branch_coverage=1 00:34:49.392 --rc genhtml_function_coverage=1 00:34:49.392 --rc genhtml_legend=1 00:34:49.393 --rc geninfo_all_blocks=1 00:34:49.393 --rc geninfo_unexecuted_blocks=1 00:34:49.393 00:34:49.393 ' 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:49.393 
05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:49.393 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:49.393 05:51:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:49.393 05:51:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 
-- # local -ga mlx 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:57.515 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:57.515 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:57.515 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:57.515 05:51:53 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:57.515 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:57.515 05:51:53 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:57.515 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:57.516 05:51:53 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:57.516 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:57.516 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:57.516 altname enp217s0f0np0 00:34:57.516 altname ens818f0np0 00:34:57.516 inet 192.168.100.8/24 scope global mlx_0_0 00:34:57.516 valid_lft forever preferred_lft forever 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:57.516 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:57.516 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:57.516 altname enp217s0f1np1 00:34:57.516 altname ens818f1np1 00:34:57.516 inet 192.168.100.9/24 scope global mlx_0_1 00:34:57.516 valid_lft forever preferred_lft forever 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # 
return 0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:57.516 
05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:57.516 192.168.100.9' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:57.516 192.168.100.9' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:57.516 192.168.100.9' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3554313 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3554313 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3554313 ']' 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.516 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:57.517 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.517 05:51:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:57.517 [2024-11-27 05:51:53.795830] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:34:57.517 [2024-11-27 05:51:53.795932] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:57.517 [2024-11-27 05:51:53.950272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:57.517 [2024-11-27 05:51:54.046302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:57.517 [2024-11-27 05:51:54.046356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:57.517 [2024-11-27 05:51:54.046369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:57.517 [2024-11-27 05:51:54.046382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:57.517 [2024-11-27 05:51:54.046391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:57.517 [2024-11-27 05:51:54.048577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.517 [2024-11-27 05:51:54.048586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3554313 00:34:58.082 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:58.340 [2024-11-27 05:51:54.828184] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7fb42fd53940) succeed. 00:34:58.340 [2024-11-27 05:51:54.837488] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7fb42fd0e940) succeed. 
00:34:58.598 05:51:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:58.856 Malloc0 00:34:58.856 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:59.114 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:59.114 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:59.372 [2024-11-27 05:51:55.787178] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:59.372 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:59.630 [2024-11-27 05:51:55.975620] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:59.630 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3554695 00:34:59.630 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:59.630 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3554695 /var/tmp/bdevperf.sock 00:34:59.630 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@835 -- # '[' -z 3554695 ']' 00:34:59.630 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:59.630 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.631 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:59.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:59.631 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:59.631 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.631 05:51:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:00.566 05:51:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.566 05:51:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:35:00.566 05:51:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:00.567 05:51:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:00.825 Nvme0n1 00:35:00.825 05:51:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:01.083 Nvme0n1 00:35:01.083 05:51:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:01.083 05:51:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:03.614 05:51:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:03.614 05:51:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:35:03.614 05:51:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:03.614 05:51:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:04.548 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:04.548 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:04.548 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.548 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:04.807 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.807 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:04.807 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.807 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.065 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:05.324 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.324 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:05.324 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.324 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:05.583 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.583 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:05.583 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.583 05:52:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:05.583 05:52:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.583 05:52:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:05.583 05:52:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n 
non_optimized 00:35:05.894 05:52:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:06.195 05:52:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:07.131 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:07.131 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:07.131 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.131 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 
4420 connected true 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.389 05:52:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:07.648 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.648 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:07.648 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.648 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:07.907 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.907 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:07.907 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.907 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:08.166 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.166 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:35:08.166 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.166 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:08.166 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.166 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:08.166 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:08.424 05:52:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:35:08.683 05:52:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:09.619 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:09.619 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:09.619 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.619 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:09.877 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.877 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:09.877 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.877 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:10.136 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:10.136 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:10.136 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.137 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:10.137 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.137 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:10.137 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.137 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:10.395 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.395 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:10.395 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.395 05:52:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:10.653 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.653 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:10.653 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.653 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:10.912 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.912 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:10.912 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:10.912 05:52:07 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:11.171 05:52:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:12.107 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:12.107 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:12.107 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.107 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:12.367 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.367 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:12.367 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.367 05:52:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:12.625 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:12.625 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
00:35:12.626 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.626 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:12.883 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.883 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:12.883 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.883 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible 
false 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.141 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:13.400 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.400 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:13.400 05:52:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:35:13.659 05:52:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:13.918 05:52:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:14.853 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:14.853 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:14.853 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.853 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.111 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:15.370 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.370 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:15.370 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.370 05:52:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.629 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:15.888 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.888 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:15.888 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:35:16.146 05:52:12 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:16.405 05:52:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:17.340 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:17.340 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:17.340 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.340 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:17.598 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.599 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:17.599 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.599 05:52:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:17.857 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.857 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:17.857 
05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.857 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:17.857 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.857 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:17.857 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.857 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:18.115 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.115 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:18.115 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.115 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:18.373 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:18.374 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:35:18.374 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.374 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:18.632 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.632 05:52:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:18.632 05:52:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:18.632 05:52:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:35:18.890 05:52:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:19.148 05:52:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:20.083 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:20.083 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:20.084 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.084 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:20.342 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.342 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:20.342 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.342 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:20.601 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.601 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:20.601 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.601 05:52:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:20.601 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.601 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:20.601 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.601 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:20.860 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.860 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:20.860 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.860 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:21.117 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.117 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:21.118 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.118 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:21.376 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.376 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:21.376 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:21.376 05:52:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:21.635 05:52:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:22.571 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:22.571 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:22.571 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.571 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:22.831 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:22.831 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:22.831 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.831 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:23.090 05:52:19 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.090 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:23.090 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.090 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.349 05:52:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:23.608 
05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.608 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:23.608 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.608 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:23.866 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.866 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:23.866 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:23.866 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:35:24.124 05:52:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:25.496 05:52:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:25.496 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:25.496 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:25.496 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:25.496 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:25.753 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:25.753 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:25.753 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:25.753 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:26.011 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.011 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:26.011 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.011 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:26.269 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.269 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:26.269 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.269 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:26.269 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.269 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:26.269 05:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:26.528 05:52:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:26.785 05:52:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:27.719 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:27.719 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:27.719 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.719 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:27.977 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.977 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:27.977 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.977 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:28.235 05:52:24 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:28.235 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:28.235 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:28.235 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.235 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:28.236 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:28.236 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.236 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:28.494 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:28.494 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:28.494 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.494 05:52:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:28.753 
05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:28.753 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:28.753 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.753 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3554695 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3554695 ']' 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3554695 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3554695 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3554695' 00:35:29.012 killing process with pid 3554695 
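The `port_status` checks traced above all follow one pattern: query `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` and run a jq filter such as `.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible` to extract a single flag (`current`, `connected`, or `accessible`) for the path on a given port. The sketch below is a hypothetical illustration of that selection logic in Python; the field names come from the jq expressions in the log, while the sample payload itself is invented and not actual `bdev_nvme_get_io_paths` output.

```python
import json

# Hypothetical payload shaped like the fields the log's jq filters reference;
# this is NOT real bdev_nvme_get_io_paths output, just an illustration.
sample = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": false}
    ]}
  ]
}
""")

def port_status(data, trsvcid, field):
    # Equivalent of: .poll_groups[].io_paths[] | select(.transport.trsvcid==TRSVCID).FIELD
    return [path[field]
            for group in data["poll_groups"]
            for path in group["io_paths"]
            if path["transport"]["trsvcid"] == trsvcid]

print(port_status(sample, "4420", "current"))     # [True]
print(port_status(sample, "4421", "accessible"))  # [False]
```

This mirrors the `check_status true false true true true false` expectation seen after `set_ANA_state non_optimized inaccessible`: the 4421 path stays `connected` but is no longer `current` or `accessible`.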
00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3554695 00:35:29.012 05:52:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3554695 00:35:29.012 { 00:35:29.012 "results": [ 00:35:29.012 { 00:35:29.012 "job": "Nvme0n1", 00:35:29.012 "core_mask": "0x4", 00:35:29.012 "workload": "verify", 00:35:29.012 "status": "terminated", 00:35:29.012 "verify_range": { 00:35:29.012 "start": 0, 00:35:29.012 "length": 16384 00:35:29.012 }, 00:35:29.012 "queue_depth": 128, 00:35:29.012 "io_size": 4096, 00:35:29.012 "runtime": 27.688163, 00:35:29.012 "iops": 14014.400305285692, 00:35:29.012 "mibps": 54.74375119252223, 00:35:29.012 "io_failed": 0, 00:35:29.012 "io_timeout": 0, 00:35:29.012 "avg_latency_us": 9111.357020591548, 00:35:29.012 "min_latency_us": 809.3696, 00:35:29.012 "max_latency_us": 3019898.88 00:35:29.012 } 00:35:29.012 ], 00:35:29.012 "core_count": 1 00:35:29.012 } 00:35:29.952 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3554695 00:35:29.952 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:29.952 [2024-11-27 05:51:56.074993] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:35:29.952 [2024-11-27 05:51:56.075098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554695 ] 00:35:29.952 [2024-11-27 05:51:56.227654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.952 [2024-11-27 05:51:56.329366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:29.952 Running I/O for 90 seconds... 
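The bdevperf results block above reports both `"iops": 14014.400305285692` and `"mibps": 54.74375119252223`; with the 4096-byte `io_size` used in this run, the two are related by a constant factor of 4096 / 2^20 = 1/256. A quick check of that arithmetic, using only values quoted from the results block:

```python
# Values copied from the "results" block in the log above.
io_size = 4096             # bytes per I/O
iops = 14014.400305285692  # reported I/O per second

# MiB/s = (I/Os per second * bytes per I/O) / bytes per MiB
mibps = iops * io_size / 2**20

# With io_size = 4096 this reduces to iops / 256, matching the reported value.
print(mibps)  # 54.74375119252223
```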
00:35:29.952 16128.00 IOPS, 63.00 MiB/s [2024-11-27T04:52:26.539Z] 16194.00 IOPS, 63.26 MiB/s [2024-11-27T04:52:26.539Z] 16170.67 IOPS, 63.17 MiB/s [2024-11-27T04:52:26.539Z] 16177.75 IOPS, 63.19 MiB/s [2024-11-27T04:52:26.539Z] 16193.40 IOPS, 63.26 MiB/s [2024-11-27T04:52:26.539Z] 16233.67 IOPS, 63.41 MiB/s [2024-11-27T04:52:26.539Z] 16237.71 IOPS, 63.43 MiB/s [2024-11-27T04:52:26.539Z] 16242.25 IOPS, 63.45 MiB/s [2024-11-27T04:52:26.539Z] 16256.78 IOPS, 63.50 MiB/s [2024-11-27T04:52:26.539Z] 16256.00 IOPS, 63.50 MiB/s [2024-11-27T04:52:26.539Z] 16270.45 IOPS, 63.56 MiB/s [2024-11-27T04:52:26.539Z] 16276.50 IOPS, 63.58 MiB/s [2024-11-27T04:52:26.539Z] [2024-11-27 05:52:10.033637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.033976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.033991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.952 [2024-11-27 05:52:10.034240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.952 [2024-11-27 05:52:10.034255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29312 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 
dnr:0 00:35:29.953 [2024-11-27 05:52:10.034885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.034974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.034990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28736 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000043ed000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182500 
00:35:29.953 [2024-11-27 05:52:10.035207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x182500 00:35:29.953 [2024-11-27 05:52:10.035328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:29.953 [2024-11-27 05:52:10.035344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.953 [2024-11-27 05:52:10.035362] nvme_qpair.c: 
00:35:29.953-00:35:29.955 [2024-11-27 05:52:10.035377 - 05:52:10.038200] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: burst of WRITE commands (sqid:1, nsid:1, lba:29400-29744, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ commands (sqid:1, nsid:1, lba:28816-29056, len:8, SGL KEYED DATA BLOCK, len:0x1000, key:0x182500), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0003-004e p:0 m:0 dnr:0 [~80 near-identical command/completion pairs elided] 00:35:29.955 15399.92 IOPS, 60.16 MiB/s [2024-11-27T04:52:26.542Z] 14299.93 IOPS, 55.86 MiB/s [2024-11-27T04:52:26.542Z] 13346.60 IOPS, 52.14 MiB/s [2024-11-27T04:52:26.542Z] 13229.44 IOPS, 51.68 MiB/s [2024-11-27T04:52:26.542Z] 13418.82 IOPS, 52.42 MiB/s [2024-11-27T04:52:26.542Z] 13525.22 IOPS, 52.83 MiB/s [2024-11-27T04:52:26.542Z] 13539.00 IOPS, 52.89 MiB/s [2024-11-27T04:52:26.542Z] 13531.75 IOPS, 52.86 MiB/s [2024-11-27T04:52:26.542Z] 13644.67 IOPS, 53.30 MiB/s [2024-11-27T04:52:26.542Z] 13768.09 IOPS, 53.78 MiB/s [2024-11-27T04:52:26.542Z] 13860.17 IOPS, 54.14 MiB/s [2024-11-27T04:52:26.542Z] 13843.54 IOPS, 54.08 MiB/s [2024-11-27T04:52:26.542Z] 13829.48 IOPS, 54.02 MiB/s [2024-11-27T04:52:26.542Z] 00:35:29.956-00:35:29.957 [2024-11-27 05:52:23.172465 - 05:52:23.174389] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second burst of WRITE commands (sqid:1, nsid:1, lba:57648-57928, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ commands (sqid:1, nsid:1, lba:57216-57488, len:8, SGL KEYED DATA BLOCK, len:0x1000, key:0x182500), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0047-0069 p:0 m:0 dnr:0 [~35 near-identical command/completion pairs elided]
00:35:29.957 [2024-11-27 05:52:23.174405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:57560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.174913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.174975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.174991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.175007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.175036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.175066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.175095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.175127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.175157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.175188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 
[2024-11-27 05:52:23.175219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.957 [2024-11-27 05:52:23.175248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:29.957 [2024-11-27 05:52:23.175263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182500 00:35:29.957 [2024-11-27 05:52:23.175278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:29.957 13877.23 IOPS, 54.21 MiB/s [2024-11-27T04:52:26.544Z] 13966.00 IOPS, 54.55 MiB/s [2024-11-27T04:52:26.544Z] Received shutdown signal, test time was about 27.688824 seconds 00:35:29.957 00:35:29.957 Latency(us) 00:35:29.957 [2024-11-27T04:52:26.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.957 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:29.957 Verification LBA range: start 0x0 length 0x4000 00:35:29.957 Nvme0n1 : 27.69 14014.40 54.74 0.00 0.00 9111.36 809.37 3019898.88 00:35:29.957 [2024-11-27T04:52:26.544Z] =================================================================================================================== 00:35:29.957 [2024-11-27T04:52:26.544Z] Total : 14014.40 54.74 0.00 0.00 9111.36 809.37 3019898.88 00:35:29.957 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:30.216 05:52:26 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.216 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:30.217 rmmod nvme_rdma 00:35:30.217 rmmod nvme_fabrics 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3554313 ']' 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3554313 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3554313 ']' 00:35:30.217 05:52:26 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3554313 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3554313 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3554313' 00:35:30.217 killing process with pid 3554313 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3554313 00:35:30.217 05:52:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3554313 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:32.118 00:35:32.118 real 0m42.524s 00:35:32.118 user 1m55.651s 00:35:32.118 sys 0m10.521s 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:32.118 ************************************ 00:35:32.118 END TEST nvmf_host_multipath_status 00:35:32.118 ************************************ 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.118 ************************************ 00:35:32.118 START TEST nvmf_discovery_remove_ifc 00:35:32.118 ************************************ 00:35:32.118 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:32.118 * Looking for test storage... 00:35:32.119 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.119 05:52:28 
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:32.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.119 --rc genhtml_branch_coverage=1 00:35:32.119 --rc genhtml_function_coverage=1 00:35:32.119 --rc genhtml_legend=1 00:35:32.119 --rc 
geninfo_all_blocks=1 00:35:32.119 --rc geninfo_unexecuted_blocks=1 00:35:32.119 00:35:32.119 ' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:32.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.119 --rc genhtml_branch_coverage=1 00:35:32.119 --rc genhtml_function_coverage=1 00:35:32.119 --rc genhtml_legend=1 00:35:32.119 --rc geninfo_all_blocks=1 00:35:32.119 --rc geninfo_unexecuted_blocks=1 00:35:32.119 00:35:32.119 ' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:32.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.119 --rc genhtml_branch_coverage=1 00:35:32.119 --rc genhtml_function_coverage=1 00:35:32.119 --rc genhtml_legend=1 00:35:32.119 --rc geninfo_all_blocks=1 00:35:32.119 --rc geninfo_unexecuted_blocks=1 00:35:32.119 00:35:32.119 ' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:32.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.119 --rc genhtml_branch_coverage=1 00:35:32.119 --rc genhtml_function_coverage=1 00:35:32.119 --rc genhtml_legend=1 00:35:32.119 --rc geninfo_all_blocks=1 00:35:32.119 --rc geninfo_unexecuted_blocks=1 00:35:32.119 00:35:32.119 ' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.119 05:52:28 
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:32.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:32.119 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:35:32.120 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
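The `[: : integer expression expected` failure traced above comes from `'[' '' -eq 1 ']'` at `test/nvmf/common.sh` line 33: an unset or empty variable reaches a numeric `-eq` test. A hedged sketch of the usual guard, defaulting the empty value before comparing; the variable name is a stand-in, not the actual one used by `common.sh`:

```shell
#!/usr/bin/env bash
# SPDK_TEST_FLAG is an illustrative stand-in for whatever empty variable
# reaches the numeric test in the trace above.
SPDK_TEST_FLAG=""

# Failing form (what the trace records): [ "" -eq 1 ] -> "integer expression expected"
# Guarded form: ${var:-0} substitutes 0 when the variable is unset or empty.
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```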
00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:35:32.120 00:35:32.120 real 0m0.233s 00:35:32.120 user 0m0.134s 00:35:32.120 sys 0m0.118s 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.120 ************************************ 00:35:32.120 END TEST nvmf_discovery_remove_ifc 00:35:32.120 ************************************ 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.120 ************************************ 00:35:32.120 START TEST nvmf_identify_kernel_target 00:35:32.120 ************************************ 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:35:32.120 * Looking for test storage... 
00:35:32.120 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:32.120 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:32.380 05:52:28 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
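The long `cmp_versions` trace above splits each version string on `.`, `-`, and `:` into arrays and compares them field by field (here concluding 1.15 < 2, so the newer lcov options are selected). A compact sketch of that field-wise comparison; the function below is an illustrative reimplementation, not the upstream `scripts/common.sh` code:

```shell
#!/usr/bin/env bash
# Field-wise "less than" version comparison in the spirit of the
# cmp_versions trace above (illustrative reimplementation).
version_lt() {
    local -a ver1 ver2
    local IFS='.-:'                 # split on the same separators the trace uses
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

if version_lt 1.15 2; then echo "1.15 < 2"; fi   # matches the trace's conclusion
```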
00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.380 --rc genhtml_branch_coverage=1 00:35:32.380 --rc genhtml_function_coverage=1 00:35:32.380 --rc genhtml_legend=1 00:35:32.380 --rc geninfo_all_blocks=1 00:35:32.380 --rc geninfo_unexecuted_blocks=1 00:35:32.380 00:35:32.380 ' 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.380 --rc genhtml_branch_coverage=1 00:35:32.380 --rc genhtml_function_coverage=1 00:35:32.380 --rc genhtml_legend=1 00:35:32.380 --rc geninfo_all_blocks=1 00:35:32.380 --rc geninfo_unexecuted_blocks=1 00:35:32.380 00:35:32.380 ' 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.380 --rc genhtml_branch_coverage=1 00:35:32.380 --rc genhtml_function_coverage=1 00:35:32.380 --rc genhtml_legend=1 00:35:32.380 --rc geninfo_all_blocks=1 00:35:32.380 --rc geninfo_unexecuted_blocks=1 00:35:32.380 00:35:32.380 ' 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.380 --rc genhtml_branch_coverage=1 00:35:32.380 --rc genhtml_function_coverage=1 00:35:32.380 --rc genhtml_legend=1 00:35:32.380 --rc geninfo_all_blocks=1 00:35:32.380 --rc geninfo_unexecuted_blocks=1 00:35:32.380 00:35:32.380 ' 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@7 -- # uname -s 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.380 05:52:28 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.380 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:32.381 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # 
nvmftestinit 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.381 05:52:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:40.494 05:52:37 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:40.494 
Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:40.494 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:40.494 05:52:37 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:40.494 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:40.494 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:40.754 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:35:40.754 05:52:37 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:35:40.754 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:40.754 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:40.754 altname enp217s0f0np0 00:35:40.754 altname ens818f0np0 00:35:40.754 inet 192.168.100.8/24 scope global mlx_0_0 00:35:40.754 valid_lft forever preferred_lft forever 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 
00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:35:40.754 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:40.754 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:40.754 altname enp217s0f1np1 00:35:40.754 altname ens818f1np1 00:35:40.754 inet 192.168.100.9/24 scope global mlx_0_1 00:35:40.754 valid_lft forever preferred_lft forever 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:40.754 05:52:37 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:40.754 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:40.755 
05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:35:40.755 192.168.100.9' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:35:40.755 192.168.100.9' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:40.755 05:52:37 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:35:40.755 192.168.100.9' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:40.755 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:41.012 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:41.012 05:52:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:35:44.295 Waiting for block devices as requested 00:35:44.295 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:44.295 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:44.295 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:44.295 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:44.554 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:44.554 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:44.554 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:44.811 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:44.811 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:44.811 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:45.071 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:45.071 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:45.071 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:45.330 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:45.330 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:45.330 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:45.589 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:45.589 
05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:45.589 No valid GPT data, bailing 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:45.589 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@695 -- # echo 1 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:35:45.847 00:35:45.847 Discovery Log Number of Records 2, Generation counter 2 00:35:45.847 =====Discovery Log Entry 0====== 00:35:45.847 trtype: rdma 00:35:45.847 adrfam: ipv4 00:35:45.847 subtype: current discovery subsystem 00:35:45.847 treq: not specified, sq flow control disable supported 00:35:45.847 portid: 1 00:35:45.847 trsvcid: 4420 00:35:45.847 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:45.847 traddr: 192.168.100.8 00:35:45.847 eflags: none 00:35:45.847 rdma_prtype: not specified 00:35:45.847 rdma_qptype: connected 00:35:45.847 rdma_cms: rdma-cm 00:35:45.847 rdma_pkey: 0x0000 00:35:45.847 =====Discovery Log Entry 1====== 00:35:45.847 trtype: rdma 00:35:45.847 adrfam: ipv4 00:35:45.847 subtype: nvme subsystem 00:35:45.847 treq: not specified, sq flow control disable supported 00:35:45.847 portid: 1 
00:35:45.847 trsvcid: 4420 00:35:45.847 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:45.847 traddr: 192.168.100.8 00:35:45.847 eflags: none 00:35:45.847 rdma_prtype: not specified 00:35:45.847 rdma_qptype: connected 00:35:45.847 rdma_cms: rdma-cm 00:35:45.847 rdma_pkey: 0x0000 00:35:45.847 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:35:45.847 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:46.106 ===================================================== 00:35:46.106 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:46.106 ===================================================== 00:35:46.106 Controller Capabilities/Features 00:35:46.106 ================================ 00:35:46.106 Vendor ID: 0000 00:35:46.106 Subsystem Vendor ID: 0000 00:35:46.106 Serial Number: 1337ecbfd69829e58888 00:35:46.106 Model Number: Linux 00:35:46.106 Firmware Version: 6.8.9-20 00:35:46.106 Recommended Arb Burst: 0 00:35:46.106 IEEE OUI Identifier: 00 00 00 00:35:46.106 Multi-path I/O 00:35:46.106 May have multiple subsystem ports: No 00:35:46.106 May have multiple controllers: No 00:35:46.106 Associated with SR-IOV VF: No 00:35:46.106 Max Data Transfer Size: Unlimited 00:35:46.106 Max Number of Namespaces: 0 00:35:46.106 Max Number of I/O Queues: 1024 00:35:46.106 NVMe Specification Version (VS): 1.3 00:35:46.106 NVMe Specification Version (Identify): 1.3 00:35:46.106 Maximum Queue Entries: 128 00:35:46.106 Contiguous Queues Required: No 00:35:46.106 Arbitration Mechanisms Supported 00:35:46.106 Weighted Round Robin: Not Supported 00:35:46.106 Vendor Specific: Not Supported 00:35:46.106 Reset Timeout: 7500 ms 00:35:46.106 Doorbell Stride: 4 bytes 00:35:46.106 NVM Subsystem Reset: Not Supported 00:35:46.106 Command Sets Supported 00:35:46.106 NVM Command Set: 
Supported 00:35:46.107 Boot Partition: Not Supported 00:35:46.107 Memory Page Size Minimum: 4096 bytes 00:35:46.107 Memory Page Size Maximum: 4096 bytes 00:35:46.107 Persistent Memory Region: Not Supported 00:35:46.107 Optional Asynchronous Events Supported 00:35:46.107 Namespace Attribute Notices: Not Supported 00:35:46.107 Firmware Activation Notices: Not Supported 00:35:46.107 ANA Change Notices: Not Supported 00:35:46.107 PLE Aggregate Log Change Notices: Not Supported 00:35:46.107 LBA Status Info Alert Notices: Not Supported 00:35:46.107 EGE Aggregate Log Change Notices: Not Supported 00:35:46.107 Normal NVM Subsystem Shutdown event: Not Supported 00:35:46.107 Zone Descriptor Change Notices: Not Supported 00:35:46.107 Discovery Log Change Notices: Supported 00:35:46.107 Controller Attributes 00:35:46.107 128-bit Host Identifier: Not Supported 00:35:46.107 Non-Operational Permissive Mode: Not Supported 00:35:46.107 NVM Sets: Not Supported 00:35:46.107 Read Recovery Levels: Not Supported 00:35:46.107 Endurance Groups: Not Supported 00:35:46.107 Predictable Latency Mode: Not Supported 00:35:46.107 Traffic Based Keep ALive: Not Supported 00:35:46.107 Namespace Granularity: Not Supported 00:35:46.107 SQ Associations: Not Supported 00:35:46.107 UUID List: Not Supported 00:35:46.107 Multi-Domain Subsystem: Not Supported 00:35:46.107 Fixed Capacity Management: Not Supported 00:35:46.107 Variable Capacity Management: Not Supported 00:35:46.107 Delete Endurance Group: Not Supported 00:35:46.107 Delete NVM Set: Not Supported 00:35:46.107 Extended LBA Formats Supported: Not Supported 00:35:46.107 Flexible Data Placement Supported: Not Supported 00:35:46.107 00:35:46.107 Controller Memory Buffer Support 00:35:46.107 ================================ 00:35:46.107 Supported: No 00:35:46.107 00:35:46.107 Persistent Memory Region Support 00:35:46.107 ================================ 00:35:46.107 Supported: No 00:35:46.107 00:35:46.107 Admin Command Set Attributes 00:35:46.107 
============================ 00:35:46.107 Security Send/Receive: Not Supported 00:35:46.107 Format NVM: Not Supported 00:35:46.107 Firmware Activate/Download: Not Supported 00:35:46.107 Namespace Management: Not Supported 00:35:46.107 Device Self-Test: Not Supported 00:35:46.107 Directives: Not Supported 00:35:46.107 NVMe-MI: Not Supported 00:35:46.107 Virtualization Management: Not Supported 00:35:46.107 Doorbell Buffer Config: Not Supported 00:35:46.107 Get LBA Status Capability: Not Supported 00:35:46.107 Command & Feature Lockdown Capability: Not Supported 00:35:46.107 Abort Command Limit: 1 00:35:46.107 Async Event Request Limit: 1 00:35:46.107 Number of Firmware Slots: N/A 00:35:46.107 Firmware Slot 1 Read-Only: N/A 00:35:46.107 Firmware Activation Without Reset: N/A 00:35:46.107 Multiple Update Detection Support: N/A 00:35:46.107 Firmware Update Granularity: No Information Provided 00:35:46.107 Per-Namespace SMART Log: No 00:35:46.107 Asymmetric Namespace Access Log Page: Not Supported 00:35:46.107 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:46.107 Command Effects Log Page: Not Supported 00:35:46.107 Get Log Page Extended Data: Supported 00:35:46.107 Telemetry Log Pages: Not Supported 00:35:46.107 Persistent Event Log Pages: Not Supported 00:35:46.107 Supported Log Pages Log Page: May Support 00:35:46.107 Commands Supported & Effects Log Page: Not Supported 00:35:46.107 Feature Identifiers & Effects Log Page:May Support 00:35:46.107 NVMe-MI Commands & Effects Log Page: May Support 00:35:46.107 Data Area 4 for Telemetry Log: Not Supported 00:35:46.107 Error Log Page Entries Supported: 1 00:35:46.107 Keep Alive: Not Supported 00:35:46.107 00:35:46.107 NVM Command Set Attributes 00:35:46.107 ========================== 00:35:46.107 Submission Queue Entry Size 00:35:46.107 Max: 1 00:35:46.107 Min: 1 00:35:46.107 Completion Queue Entry Size 00:35:46.107 Max: 1 00:35:46.107 Min: 1 00:35:46.107 Number of Namespaces: 0 00:35:46.107 Compare Command: Not 
Supported 00:35:46.107 Write Uncorrectable Command: Not Supported 00:35:46.107 Dataset Management Command: Not Supported 00:35:46.107 Write Zeroes Command: Not Supported 00:35:46.107 Set Features Save Field: Not Supported 00:35:46.107 Reservations: Not Supported 00:35:46.107 Timestamp: Not Supported 00:35:46.107 Copy: Not Supported 00:35:46.107 Volatile Write Cache: Not Present 00:35:46.107 Atomic Write Unit (Normal): 1 00:35:46.107 Atomic Write Unit (PFail): 1 00:35:46.107 Atomic Compare & Write Unit: 1 00:35:46.107 Fused Compare & Write: Not Supported 00:35:46.107 Scatter-Gather List 00:35:46.107 SGL Command Set: Supported 00:35:46.107 SGL Keyed: Supported 00:35:46.107 SGL Bit Bucket Descriptor: Not Supported 00:35:46.107 SGL Metadata Pointer: Not Supported 00:35:46.107 Oversized SGL: Not Supported 00:35:46.107 SGL Metadata Address: Not Supported 00:35:46.107 SGL Offset: Supported 00:35:46.107 Transport SGL Data Block: Not Supported 00:35:46.107 Replay Protected Memory Block: Not Supported 00:35:46.107 00:35:46.107 Firmware Slot Information 00:35:46.107 ========================= 00:35:46.107 Active slot: 0 00:35:46.107 00:35:46.107 00:35:46.107 Error Log 00:35:46.107 ========= 00:35:46.107 00:35:46.107 Active Namespaces 00:35:46.107 ================= 00:35:46.107 Discovery Log Page 00:35:46.107 ================== 00:35:46.107 Generation Counter: 2 00:35:46.107 Number of Records: 2 00:35:46.107 Record Format: 0 00:35:46.107 00:35:46.107 Discovery Log Entry 0 00:35:46.107 ---------------------- 00:35:46.107 Transport Type: 1 (RDMA) 00:35:46.107 Address Family: 1 (IPv4) 00:35:46.107 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:46.107 Entry Flags: 00:35:46.107 Duplicate Returned Information: 0 00:35:46.107 Explicit Persistent Connection Support for Discovery: 0 00:35:46.107 Transport Requirements: 00:35:46.107 Secure Channel: Not Specified 00:35:46.107 Port ID: 1 (0x0001) 00:35:46.107 Controller ID: 65535 (0xffff) 00:35:46.107 Admin Max SQ Size: 32 
00:35:46.107 Transport Service Identifier: 4420 00:35:46.107 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:46.107 Transport Address: 192.168.100.8 00:35:46.107 Transport Specific Address Subtype - RDMA 00:35:46.107 RDMA QP Service Type: 1 (Reliable Connected) 00:35:46.107 RDMA Provider Type: 1 (No provider specified) 00:35:46.107 RDMA CM Service: 1 (RDMA_CM) 00:35:46.107 Discovery Log Entry 1 00:35:46.107 ---------------------- 00:35:46.107 Transport Type: 1 (RDMA) 00:35:46.107 Address Family: 1 (IPv4) 00:35:46.107 Subsystem Type: 2 (NVM Subsystem) 00:35:46.107 Entry Flags: 00:35:46.107 Duplicate Returned Information: 0 00:35:46.107 Explicit Persistent Connection Support for Discovery: 0 00:35:46.107 Transport Requirements: 00:35:46.107 Secure Channel: Not Specified 00:35:46.107 Port ID: 1 (0x0001) 00:35:46.107 Controller ID: 65535 (0xffff) 00:35:46.107 Admin Max SQ Size: 32 00:35:46.107 Transport Service Identifier: 4420 00:35:46.107 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:46.107 Transport Address: 192.168.100.8 00:35:46.107 Transport Specific Address Subtype - RDMA 00:35:46.107 RDMA QP Service Type: 1 (Reliable Connected) 00:35:46.107 RDMA Provider Type: 1 (No provider specified) 00:35:46.107 RDMA CM Service: 1 (RDMA_CM) 00:35:46.107 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:46.367 get_feature(0x01) failed 00:35:46.367 get_feature(0x02) failed 00:35:46.367 get_feature(0x04) failed 00:35:46.367 ===================================================== 00:35:46.367 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:35:46.367 ===================================================== 00:35:46.367 Controller Capabilities/Features 00:35:46.367 
================================ 00:35:46.367 Vendor ID: 0000 00:35:46.367 Subsystem Vendor ID: 0000 00:35:46.367 Serial Number: d79624fbfc371d6b25b8 00:35:46.367 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:46.367 Firmware Version: 6.8.9-20 00:35:46.367 Recommended Arb Burst: 6 00:35:46.367 IEEE OUI Identifier: 00 00 00 00:35:46.367 Multi-path I/O 00:35:46.367 May have multiple subsystem ports: Yes 00:35:46.367 May have multiple controllers: Yes 00:35:46.367 Associated with SR-IOV VF: No 00:35:46.367 Max Data Transfer Size: 1048576 00:35:46.367 Max Number of Namespaces: 1024 00:35:46.367 Max Number of I/O Queues: 128 00:35:46.367 NVMe Specification Version (VS): 1.3 00:35:46.367 NVMe Specification Version (Identify): 1.3 00:35:46.367 Maximum Queue Entries: 128 00:35:46.367 Contiguous Queues Required: No 00:35:46.367 Arbitration Mechanisms Supported 00:35:46.367 Weighted Round Robin: Not Supported 00:35:46.367 Vendor Specific: Not Supported 00:35:46.367 Reset Timeout: 7500 ms 00:35:46.367 Doorbell Stride: 4 bytes 00:35:46.367 NVM Subsystem Reset: Not Supported 00:35:46.367 Command Sets Supported 00:35:46.367 NVM Command Set: Supported 00:35:46.367 Boot Partition: Not Supported 00:35:46.367 Memory Page Size Minimum: 4096 bytes 00:35:46.367 Memory Page Size Maximum: 4096 bytes 00:35:46.367 Persistent Memory Region: Not Supported 00:35:46.367 Optional Asynchronous Events Supported 00:35:46.367 Namespace Attribute Notices: Supported 00:35:46.367 Firmware Activation Notices: Not Supported 00:35:46.367 ANA Change Notices: Supported 00:35:46.367 PLE Aggregate Log Change Notices: Not Supported 00:35:46.367 LBA Status Info Alert Notices: Not Supported 00:35:46.367 EGE Aggregate Log Change Notices: Not Supported 00:35:46.367 Normal NVM Subsystem Shutdown event: Not Supported 00:35:46.367 Zone Descriptor Change Notices: Not Supported 00:35:46.367 Discovery Log Change Notices: Not Supported 00:35:46.367 Controller Attributes 00:35:46.367 128-bit Host Identifier: 
Supported 00:35:46.367 Non-Operational Permissive Mode: Not Supported 00:35:46.367 NVM Sets: Not Supported 00:35:46.367 Read Recovery Levels: Not Supported 00:35:46.367 Endurance Groups: Not Supported 00:35:46.367 Predictable Latency Mode: Not Supported 00:35:46.367 Traffic Based Keep ALive: Supported 00:35:46.367 Namespace Granularity: Not Supported 00:35:46.367 SQ Associations: Not Supported 00:35:46.367 UUID List: Not Supported 00:35:46.367 Multi-Domain Subsystem: Not Supported 00:35:46.367 Fixed Capacity Management: Not Supported 00:35:46.367 Variable Capacity Management: Not Supported 00:35:46.367 Delete Endurance Group: Not Supported 00:35:46.367 Delete NVM Set: Not Supported 00:35:46.367 Extended LBA Formats Supported: Not Supported 00:35:46.367 Flexible Data Placement Supported: Not Supported 00:35:46.367 00:35:46.367 Controller Memory Buffer Support 00:35:46.367 ================================ 00:35:46.367 Supported: No 00:35:46.367 00:35:46.367 Persistent Memory Region Support 00:35:46.367 ================================ 00:35:46.367 Supported: No 00:35:46.367 00:35:46.367 Admin Command Set Attributes 00:35:46.367 ============================ 00:35:46.367 Security Send/Receive: Not Supported 00:35:46.367 Format NVM: Not Supported 00:35:46.367 Firmware Activate/Download: Not Supported 00:35:46.367 Namespace Management: Not Supported 00:35:46.367 Device Self-Test: Not Supported 00:35:46.367 Directives: Not Supported 00:35:46.367 NVMe-MI: Not Supported 00:35:46.367 Virtualization Management: Not Supported 00:35:46.367 Doorbell Buffer Config: Not Supported 00:35:46.367 Get LBA Status Capability: Not Supported 00:35:46.367 Command & Feature Lockdown Capability: Not Supported 00:35:46.367 Abort Command Limit: 4 00:35:46.367 Async Event Request Limit: 4 00:35:46.367 Number of Firmware Slots: N/A 00:35:46.367 Firmware Slot 1 Read-Only: N/A 00:35:46.367 Firmware Activation Without Reset: N/A 00:35:46.367 Multiple Update Detection Support: N/A 00:35:46.367 
Firmware Update Granularity: No Information Provided 00:35:46.367 Per-Namespace SMART Log: Yes 00:35:46.367 Asymmetric Namespace Access Log Page: Supported 00:35:46.367 ANA Transition Time : 10 sec 00:35:46.367 00:35:46.367 Asymmetric Namespace Access Capabilities 00:35:46.367 ANA Optimized State : Supported 00:35:46.367 ANA Non-Optimized State : Supported 00:35:46.367 ANA Inaccessible State : Supported 00:35:46.367 ANA Persistent Loss State : Supported 00:35:46.367 ANA Change State : Supported 00:35:46.367 ANAGRPID is not changed : No 00:35:46.367 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:46.367 00:35:46.367 ANA Group Identifier Maximum : 128 00:35:46.367 Number of ANA Group Identifiers : 128 00:35:46.367 Max Number of Allowed Namespaces : 1024 00:35:46.367 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:46.367 Command Effects Log Page: Supported 00:35:46.367 Get Log Page Extended Data: Supported 00:35:46.367 Telemetry Log Pages: Not Supported 00:35:46.367 Persistent Event Log Pages: Not Supported 00:35:46.367 Supported Log Pages Log Page: May Support 00:35:46.367 Commands Supported & Effects Log Page: Not Supported 00:35:46.367 Feature Identifiers & Effects Log Page:May Support 00:35:46.367 NVMe-MI Commands & Effects Log Page: May Support 00:35:46.367 Data Area 4 for Telemetry Log: Not Supported 00:35:46.367 Error Log Page Entries Supported: 128 00:35:46.367 Keep Alive: Supported 00:35:46.367 Keep Alive Granularity: 1000 ms 00:35:46.367 00:35:46.367 NVM Command Set Attributes 00:35:46.367 ========================== 00:35:46.367 Submission Queue Entry Size 00:35:46.367 Max: 64 00:35:46.367 Min: 64 00:35:46.367 Completion Queue Entry Size 00:35:46.367 Max: 16 00:35:46.367 Min: 16 00:35:46.367 Number of Namespaces: 1024 00:35:46.367 Compare Command: Not Supported 00:35:46.367 Write Uncorrectable Command: Not Supported 00:35:46.367 Dataset Management Command: Supported 00:35:46.367 Write Zeroes Command: Supported 00:35:46.367 Set Features Save Field: 
Not Supported 00:35:46.367 Reservations: Not Supported 00:35:46.367 Timestamp: Not Supported 00:35:46.367 Copy: Not Supported 00:35:46.367 Volatile Write Cache: Present 00:35:46.367 Atomic Write Unit (Normal): 1 00:35:46.367 Atomic Write Unit (PFail): 1 00:35:46.367 Atomic Compare & Write Unit: 1 00:35:46.367 Fused Compare & Write: Not Supported 00:35:46.367 Scatter-Gather List 00:35:46.367 SGL Command Set: Supported 00:35:46.367 SGL Keyed: Supported 00:35:46.367 SGL Bit Bucket Descriptor: Not Supported 00:35:46.367 SGL Metadata Pointer: Not Supported 00:35:46.367 Oversized SGL: Not Supported 00:35:46.367 SGL Metadata Address: Not Supported 00:35:46.367 SGL Offset: Supported 00:35:46.367 Transport SGL Data Block: Not Supported 00:35:46.367 Replay Protected Memory Block: Not Supported 00:35:46.367 00:35:46.367 Firmware Slot Information 00:35:46.367 ========================= 00:35:46.367 Active slot: 0 00:35:46.367 00:35:46.367 Asymmetric Namespace Access 00:35:46.367 =========================== 00:35:46.367 Change Count : 0 00:35:46.367 Number of ANA Group Descriptors : 1 00:35:46.367 ANA Group Descriptor : 0 00:35:46.367 ANA Group ID : 1 00:35:46.367 Number of NSID Values : 1 00:35:46.367 Change Count : 0 00:35:46.367 ANA State : 1 00:35:46.367 Namespace Identifier : 1 00:35:46.367 00:35:46.367 Commands Supported and Effects 00:35:46.367 ============================== 00:35:46.367 Admin Commands 00:35:46.367 -------------- 00:35:46.367 Get Log Page (02h): Supported 00:35:46.367 Identify (06h): Supported 00:35:46.367 Abort (08h): Supported 00:35:46.367 Set Features (09h): Supported 00:35:46.367 Get Features (0Ah): Supported 00:35:46.367 Asynchronous Event Request (0Ch): Supported 00:35:46.367 Keep Alive (18h): Supported 00:35:46.367 I/O Commands 00:35:46.367 ------------ 00:35:46.367 Flush (00h): Supported 00:35:46.367 Write (01h): Supported LBA-Change 00:35:46.367 Read (02h): Supported 00:35:46.367 Write Zeroes (08h): Supported LBA-Change 00:35:46.367 Dataset 
Management (09h): Supported 00:35:46.367 00:35:46.367 Error Log 00:35:46.367 ========= 00:35:46.367 Entry: 0 00:35:46.367 Error Count: 0x3 00:35:46.368 Submission Queue Id: 0x0 00:35:46.368 Command Id: 0x5 00:35:46.368 Phase Bit: 0 00:35:46.368 Status Code: 0x2 00:35:46.368 Status Code Type: 0x0 00:35:46.368 Do Not Retry: 1 00:35:46.368 Error Location: 0x28 00:35:46.368 LBA: 0x0 00:35:46.368 Namespace: 0x0 00:35:46.368 Vendor Log Page: 0x0 00:35:46.368 ----------- 00:35:46.368 Entry: 1 00:35:46.368 Error Count: 0x2 00:35:46.368 Submission Queue Id: 0x0 00:35:46.368 Command Id: 0x5 00:35:46.368 Phase Bit: 0 00:35:46.368 Status Code: 0x2 00:35:46.368 Status Code Type: 0x0 00:35:46.368 Do Not Retry: 1 00:35:46.368 Error Location: 0x28 00:35:46.368 LBA: 0x0 00:35:46.368 Namespace: 0x0 00:35:46.368 Vendor Log Page: 0x0 00:35:46.368 ----------- 00:35:46.368 Entry: 2 00:35:46.368 Error Count: 0x1 00:35:46.368 Submission Queue Id: 0x0 00:35:46.368 Command Id: 0x0 00:35:46.368 Phase Bit: 0 00:35:46.368 Status Code: 0x2 00:35:46.368 Status Code Type: 0x0 00:35:46.368 Do Not Retry: 1 00:35:46.368 Error Location: 0x28 00:35:46.368 LBA: 0x0 00:35:46.368 Namespace: 0x0 00:35:46.368 Vendor Log Page: 0x0 00:35:46.368 00:35:46.368 Number of Queues 00:35:46.368 ================ 00:35:46.368 Number of I/O Submission Queues: 128 00:35:46.368 Number of I/O Completion Queues: 128 00:35:46.368 00:35:46.368 ZNS Specific Controller Data 00:35:46.368 ============================ 00:35:46.368 Zone Append Size Limit: 0 00:35:46.368 00:35:46.368 00:35:46.368 Active Namespaces 00:35:46.368 ================= 00:35:46.368 get_feature(0x05) failed 00:35:46.368 Namespace ID:1 00:35:46.368 Command Set Identifier: NVM (00h) 00:35:46.368 Deallocate: Supported 00:35:46.368 Deallocated/Unwritten Error: Not Supported 00:35:46.368 Deallocated Read Value: Unknown 00:35:46.368 Deallocate in Write Zeroes: Not Supported 00:35:46.368 Deallocated Guard Field: 0xFFFF 00:35:46.368 Flush: Supported 00:35:46.368 
Reservation: Not Supported 00:35:46.368 Namespace Sharing Capabilities: Multiple Controllers 00:35:46.368 Size (in LBAs): 3907029168 (1863GiB) 00:35:46.368 Capacity (in LBAs): 3907029168 (1863GiB) 00:35:46.368 Utilization (in LBAs): 3907029168 (1863GiB) 00:35:46.368 UUID: 91b124e7-aa55-43c6-970d-34b75aec39a1 00:35:46.368 Thin Provisioning: Not Supported 00:35:46.368 Per-NS Atomic Units: Yes 00:35:46.368 Atomic Boundary Size (Normal): 0 00:35:46.368 Atomic Boundary Size (PFail): 0 00:35:46.368 Atomic Boundary Offset: 0 00:35:46.368 NGUID/EUI64 Never Reused: No 00:35:46.368 ANA group ID: 1 00:35:46.368 Namespace Write Protected: No 00:35:46.368 Number of LBA Formats: 1 00:35:46.368 Current LBA Format: LBA Format #00 00:35:46.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:46.368 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:46.368 rmmod nvme_rdma 00:35:46.368 rmmod nvme_fabrics 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 
00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:35:46.368 05:52:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:49.654 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 
00:35:49.654 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:49.654 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:49.654 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:49.914 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:51.819 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:35:52.078 00:35:52.078 real 0m19.890s 00:35:52.078 user 0m5.111s 00:35:52.078 sys 0m11.926s 00:35:52.078 05:52:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.078 05:52:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:52.078 ************************************ 00:35:52.078 END TEST nvmf_identify_kernel_target 00:35:52.078 ************************************ 00:35:52.078 05:52:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:52.078 05:52:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:52.078 05:52:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.078 05:52:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.078 ************************************ 00:35:52.078 START TEST nvmf_auth_host 00:35:52.078 
************************************ 00:35:52.078 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:52.078 * Looking for test storage... 00:35:52.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- 
# case "$op" in 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.337 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:35:52.337 --rc genhtml_branch_coverage=1 00:35:52.337 --rc genhtml_function_coverage=1 00:35:52.337 --rc genhtml_legend=1 00:35:52.337 --rc geninfo_all_blocks=1 00:35:52.337 --rc geninfo_unexecuted_blocks=1 00:35:52.337 00:35:52.338 ' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.338 --rc genhtml_branch_coverage=1 00:35:52.338 --rc genhtml_function_coverage=1 00:35:52.338 --rc genhtml_legend=1 00:35:52.338 --rc geninfo_all_blocks=1 00:35:52.338 --rc geninfo_unexecuted_blocks=1 00:35:52.338 00:35:52.338 ' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.338 --rc genhtml_branch_coverage=1 00:35:52.338 --rc genhtml_function_coverage=1 00:35:52.338 --rc genhtml_legend=1 00:35:52.338 --rc geninfo_all_blocks=1 00:35:52.338 --rc geninfo_unexecuted_blocks=1 00:35:52.338 00:35:52.338 ' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.338 --rc genhtml_branch_coverage=1 00:35:52.338 --rc genhtml_function_coverage=1 00:35:52.338 --rc genhtml_legend=1 00:35:52.338 --rc geninfo_all_blocks=1 00:35:52.338 --rc geninfo_unexecuted_blocks=1 00:35:52.338 00:35:52.338 ' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.338 05:52:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:52.338 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:52.338 05:52:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.338 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.339 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.339 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.339 05:52:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 
00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:00.460 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:00.460 Found 
0000:d9:00.1 (0x15b3 - 0x1015) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:00.460 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:00.460 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:00.460 
05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:00.460 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:00.461 05:52:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:00.461 05:52:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:00.461 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:00.461 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:00.461 altname enp217s0f0np0 00:36:00.461 altname ens818f0np0 00:36:00.461 inet 192.168.100.8/24 scope global mlx_0_0 00:36:00.461 valid_lft forever preferred_lft forever 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # 
get_ip_address mlx_0_1 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:00.461 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:00.461 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:00.461 altname enp217s0f1np1 00:36:00.461 altname ens818f1np1 00:36:00.461 inet 192.168.100.9/24 scope global mlx_0_1 00:36:00.461 valid_lft forever preferred_lft forever 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:00.461 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # 
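The `get_ip_address` steps traced above (nvmf/common.sh@116-117) reduce `ip -o -4 addr show <if>` to a bare IPv4: field 4 of the one-line output is the CIDR address, and `cut` drops the prefix length. A minimal standalone sketch, using a canned sample line so it runs without an `mlx_0_0` device present:

```shell
# Canned `ip -o -4 addr show` output (hypothetical device mlx_0_0).
sample='6: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'
# Field 4 is "192.168.100.8/24"; cut -d/ -f1 strips the /24 prefix length.
ip=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip"   # 192.168.100.8
```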
rxe_cfg rxe-net 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:00.721 192.168.100.9' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:36:00.721 192.168.100.9' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:00.721 192.168.100.9' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- 
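The `NVMF_FIRST_TARGET_IP`/`NVMF_SECOND_TARGET_IP` assignments above split the newline-separated `RDMA_IP_LIST` with `head` and `tail`; a small sketch of the same pipeline, with the two addresses from the trace hard-coded:

```shell
# Newline-separated list as built by get_available_rdma_ips in the trace.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
# First line -> first target IP; skip one line, take the next -> second.
first=$(echo "$RDMA_IP_LIST" | head -n 1)
second=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$first $second"   # 192.168.100.8 192.168.100.9
```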
# NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3572072 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3572072 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3572072 ']' 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.721 05:52:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e082cf5ac64fe3adea4764d5eeab67a9 
00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.W9f 00:36:01.659 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e082cf5ac64fe3adea4764d5eeab67a9 0 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e082cf5ac64fe3adea4764d5eeab67a9 0 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e082cf5ac64fe3adea4764d5eeab67a9 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.W9f 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.W9f 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.W9f 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 
-- # len=64 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c4b4f55888194fa17c83dba1f33a87cc13b5fff1eac089a40378ddfd99d00cc 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.m8M 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c4b4f55888194fa17c83dba1f33a87cc13b5fff1eac089a40378ddfd99d00cc 3 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c4b4f55888194fa17c83dba1f33a87cc13b5fff1eac089a40378ddfd99d00cc 3 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c4b4f55888194fa17c83dba1f33a87cc13b5fff1eac089a40378ddfd99d00cc 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.m8M 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.m8M 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.m8M 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:01.660 05:52:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b0c1d5da88f121f9372e3d00cdc699ed6fe1106ed4e21a9 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2BB 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b0c1d5da88f121f9372e3d00cdc699ed6fe1106ed4e21a9 0 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b0c1d5da88f121f9372e3d00cdc699ed6fe1106ed4e21a9 0 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b0c1d5da88f121f9372e3d00cdc699ed6fe1106ed4e21a9 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:01.660 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:01.919 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2BB 00:36:01.919 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2BB 00:36:01.919 05:52:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2BB 00:36:01.919 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:01.919 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=457c90ae402ca937bf5849ad53613094a0c2c94d9f51b5dd 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Qh1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 457c90ae402ca937bf5849ad53613094a0c2c94d9f51b5dd 2 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 457c90ae402ca937bf5849ad53613094a0c2c94d9f51b5dd 2 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=457c90ae402ca937bf5849ad53613094a0c2c94d9f51b5dd 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:01.920 05:52:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Qh1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Qh1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Qh1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d230fe3e000a2377db310fd2254efbd 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.H8T 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d230fe3e000a2377db310fd2254efbd 1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d230fe3e000a2377db310fd2254efbd 1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d230fe3e000a2377db310fd2254efbd 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.H8T 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.H8T 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.H8T 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1bc357a4e6588d7b38fcf0469fc344ec 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2Pm 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1bc357a4e6588d7b38fcf0469fc344ec 1 00:36:01.920 05:52:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1bc357a4e6588d7b38fcf0469fc344ec 1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1bc357a4e6588d7b38fcf0469fc344ec 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2Pm 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2Pm 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2Pm 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ff76204c6b10eee29162e6c0320808839674c9c2a65c90ba 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 
00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yaH 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ff76204c6b10eee29162e6c0320808839674c9c2a65c90ba 2 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ff76204c6b10eee29162e6c0320808839674c9c2a65c90ba 2 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ff76204c6b10eee29162e6c0320808839674c9c2a65c90ba 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:01.920 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yaH 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yaH 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yaH 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:02.180 05:52:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=60c4efd281d1f024c1f7bc15f5d020bc 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rZ3 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 60c4efd281d1f024c1f7bc15f5d020bc 0 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 60c4efd281d1f024c1f7bc15f5d020bc 0 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=60c4efd281d1f024c1f7bc15f5d020bc 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rZ3 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rZ3 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rZ3 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # local -A digests 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=279fd67a9326344fc0a3a0cd26d17b73ac78c20bed7077b34670bf764ebf3847 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hh1 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 279fd67a9326344fc0a3a0cd26d17b73ac78c20bed7077b34670bf764ebf3847 3 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 279fd67a9326344fc0a3a0cd26d17b73ac78c20bed7077b34670bf764ebf3847 3 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=279fd67a9326344fc0a3a0cd26d17b73ac78c20bed7077b34670bf764ebf3847 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hh1 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hh1 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hh1 00:36:02.180 05:52:58 
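Each `gen_dhchap_key <digest> <len>` call traced above draws `len/2` random bytes as hex via `xxd` and hands them to an inline `python -` to produce the DHHC-1 secret string. A sketch of that pipeline with a fixed, non-secret sample key (assuming the DHHC-1 representation used by SPDK and nvme-cli: base64 of key bytes followed by the little-endian CRC32 of the key):

```shell
# Fixed sample key; the traced script instead uses: xxd -p -c0 -l 16 /dev/urandom
key=e082cf5ac64fe3adea4764d5eeab67a9
digest=0   # digest index per the digests map: null=0, sha256=1, sha384=2, sha512=3
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
# DHHC-1 payload is base64(key || CRC32(key) little-endian) -- an assumption
# matching nvme-cli gen-dhchap-key output, not taken verbatim from this log.
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
```

The result would then be written to a `mktemp -t spdk.key-null.XXX` file and `chmod 0600`-ed, as the `@756`-`@758` lines show.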
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3572072 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3572072 ']' 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.180 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.W9f 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.m8M ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m8M 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2BB 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Qh1 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qh1 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.H8T 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2Pm ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Pm 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yaH 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rZ3 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rZ3 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key4 /tmp/spdk.key-sha512.hh1 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:36:02.440 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local 
kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:02.441 05:52:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:36:06.631 Waiting for block devices as requested 00:36:06.631 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:06.631 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:06.631 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:06.631 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:06.631 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:06.890 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:06.890 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:06.890 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:06.890 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:07.149 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:07.149 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:07.149 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 
00:36:07.407 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:07.407 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:07.407 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:07.666 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:07.666 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:08.604 No valid GPT data, bailing 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:36:08.604 05:53:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:36:08.604 00:36:08.604 Discovery Log Number of Records 2, Generation counter 2 00:36:08.604 =====Discovery Log Entry 0====== 00:36:08.604 trtype: rdma 00:36:08.604 adrfam: ipv4 00:36:08.604 subtype: current discovery subsystem 00:36:08.604 treq: not specified, sq flow control disable supported 00:36:08.604 portid: 1 00:36:08.604 trsvcid: 4420 00:36:08.604 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:36:08.604 traddr: 192.168.100.8 00:36:08.604 eflags: none 00:36:08.604 rdma_prtype: not specified 00:36:08.604 rdma_qptype: connected 00:36:08.604 rdma_cms: rdma-cm 00:36:08.604 rdma_pkey: 0x0000 00:36:08.604 =====Discovery Log Entry 1====== 00:36:08.604 trtype: rdma 00:36:08.604 adrfam: ipv4 00:36:08.604 subtype: nvme subsystem 00:36:08.604 treq: not specified, sq flow control disable supported 00:36:08.604 portid: 1 00:36:08.604 trsvcid: 4420 00:36:08.604 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:08.604 traddr: 192.168.100.8 00:36:08.604 eflags: none 00:36:08.604 rdma_prtype: not specified 00:36:08.604 rdma_qptype: connected 00:36:08.604 rdma_cms: rdma-cm 00:36:08.604 rdma_pkey: 0x0000 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.604 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:08.605 
05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.605 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.864 nvme0n1 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.864 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.123 
05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:09.123 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.124 nvme0n1 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.124 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.383 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.383 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.383 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.383 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.384 
05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.384 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.646 nvme0n1 00:36:09.646 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.646 
05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.646 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.646 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.647 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.647 05:53:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 
00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.647 05:53:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.647 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.033 nvme0n1 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:10.033 05:53:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.033 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.327 nvme0n1 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.327 05:53:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha256 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 
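Each attach is preceded by `get_main_ns_ip` (nvmf/common.sh@769-783), which maps the transport name to the *name* of the variable holding the target address, then dereferences it with bash indirect expansion. A sketch of that resolution, using the transport and address observed in this run (the TCP-side value is hypothetical):

```shell
# Sketch of get_main_ns_ip: select the IP variable by transport, then
# dereference it via ${!var} indirect expansion.
TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=192.168.100.8   # value seen in this log
NVMF_INITIATOR_IP=10.0.0.1           # hypothetical TCP-side address

declare -A ip_candidates
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
ip_candidates["tcp"]=NVMF_INITIATOR_IP

var=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_FIRST_TARGET_IP
ip=${!var}                              # indirect expansion yields the address
echo "$ip"
```

This explains why the trace prints `ip=NVMF_FIRST_TARGET_IP` before echoing `192.168.100.8`: the helper first resolves the variable name, then its value.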
00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.327 nvme0n1 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.327 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 
ffdhe3072 0 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.585 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.586 05:53:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.844 nvme0n1 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.844 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.103 nvme0n1 
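`connect_authenticate` (auth.sh@55-61) builds the optional `--dhchap-ctrlr-key` argument with a `${var:+...}` conditional inside an array assignment, which is why keyid 4 above (whose `ckey` is empty) is attached without the flag. A sketch of just that argument construction (array contents are illustrative placeholders, not the real DHHC-1 secrets):

```shell
# Sketch of auth.sh@58: the ctrlr-key flag pair is appended only when
# ckeys[keyid] is non-empty, via ${var:+word} inside an array assignment.
ckeys=( "ck0" "ck1" "ck2" "ck3" "" )   # keyid 4 has no controller key

keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
args_with=${#ckey[@]}      # flag plus value: 2 elements

keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
args_without=${#ckey[@]}   # empty ckey: the whole expansion vanishes
```

This matches the two shapes of `bdev_nvme_attach_controller` in the trace: `--dhchap-key keyN --dhchap-ctrlr-key ckeyN` for keyids with a controller key, and `--dhchap-key key4` alone for keyid 4.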
00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:11.103 05:53:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.103 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.362 nvme0n1 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.362 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.620 05:53:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.620 nvme0n1 00:36:11.620 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.620 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.620 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.620 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.620 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.620 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.879 05:53:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.879 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.138 nvme0n1 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.138 05:53:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:12.138 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:12.138 05:53:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:12.139 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.139 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.397 nvme0n1 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.397 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.655 05:53:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:12.655 05:53:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.655 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.912 nvme0n1 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.912 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.170 nvme0n1 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.170 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.428 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.429 05:53:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.686 nvme0n1 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.687 05:53:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- 
# [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.687 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.945 nvme0n1 00:36:13.945 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.945 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.945 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.945 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.945 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.945 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.204 05:53:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.463 nvme0n1 00:36:14.463 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.463 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.463 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.463 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.463 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.463 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.721 05:53:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:14.721 05:53:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:14.721 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:14.722 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.722 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.722 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.979 nvme0n1 00:36:14.979 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.979 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.979 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.979 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.979 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.238 05:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.496 nvme0n1 00:36:15.496 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.496 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.496 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.496 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.496 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.754 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.754 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.755 
05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.755 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.014 nvme0n1 00:36:16.014 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.014 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.014 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.014 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:16.014 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.014 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:16.272 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.273 
05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.273 05:53:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.531 nvme0n1 00:36:16.531 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.531 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.531 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.531 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.531 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.531 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.791 05:53:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 
00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.791 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.358 nvme0n1 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 
-- # digest=sha256 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.358 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:17.359 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:17.359 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:17.359 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:17.359 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
192.168.100.8 00:36:17.359 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:17.359 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.359 05:53:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.293 nvme0n1 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:18.293 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:18.294 05:53:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.294 05:53:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.861 nvme0n1 00:36:18.861 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.861 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.861 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.862 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.429 nvme0n1 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.429 05:53:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.429 05:53:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:19.688 05:53:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.688 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.255 nvme0n1 00:36:20.255 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.255 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.255 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.255 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.256 05:53:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.256 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.515 nvme0n1 00:36:20.515 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.515 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.515 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.515 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.515 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.515 05:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:20.515 05:53:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.515 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.774 nvme0n1 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.774 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:21.033 
05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.033 nvme0n1 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.033 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.034 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.034 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.034 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.034 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.291 
05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.291 nvme0n1 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.291 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.549 05:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.808 nvme0n1 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:21.808 05:53:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.808 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.067 nvme0n1 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.067 05:53:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.067 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.326 nvme0n1 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.326 05:53:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:22.326 05:53:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:22.326 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:22.326 05:53:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:22.327 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:22.327 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:22.585 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.585 05:53:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.585 nvme0n1 00:36:22.585 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.585 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.585 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.585 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.585 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.585 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.844 
05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=3 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.844 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.103 nvme0n1 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.103 05:53:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.103 05:53:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.103 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.361 nvme0n1 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.361 05:53:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.361 05:53:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.361 05:53:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.619 nvme0n1 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.619 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.878 
05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.878 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.136 nvme0n1 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:24.136 05:53:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
rdma ]] 00:36:24.136 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:24.137 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:24.137 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:24.137 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:24.137 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:24.137 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.137 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.395 nvme0n1 00:36:24.395 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.395 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.395 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.395 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.395 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.395 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.653 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.653 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.653 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.653 05:53:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:24.653 05:53:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.653 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.912 nvme0n1 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 
00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.912 05:53:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.912 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.170 nvme0n1 00:36:25.170 05:53:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.170 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.170 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.170 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.171 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.429 05:53:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.997 nvme0n1 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:25.997 05:53:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.997 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.255 nvme0n1 00:36:26.255 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.255 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.255 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.255 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.255 05:53:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.256 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 
00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.514 05:53:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.514 05:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.773 nvme0n1 00:36:26.773 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.773 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.773 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.773 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.773 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.773 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.031 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.032 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.290 nvme0n1 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.290 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.548 05:53:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.548 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.549 05:53:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.549 05:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.807 nvme0n1 00:36:27.807 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.807 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.807 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.807 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.807 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.807 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.066 05:53:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4:
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=:
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4:
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]]
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=:
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:28.066 05:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x
00:36:28.634 nvme0n1
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==:
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==:
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==:
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]]
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==:
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:28.634 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:28.635 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:29.571 nvme0n1
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9:
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv:
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo
'hmac(sha384)'
00:36:29.571 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9:
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]]
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv:
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.572 05:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.140 nvme0n1
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 --
# [[ 0 == 0 ]]
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==:
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR:
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==:
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]]
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR:
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.140 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.141 05:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.711 nvme0n1
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- #
xtrace_disable
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=:
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=:
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:36:30.711 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:30.970 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:30.970 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:30.970 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.971 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.540 nvme0n1
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:31.540 05:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host --
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4:
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=:
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4:
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]]
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=:
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:31.540 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:31.541 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.800 nvme0n1
00:36:31.800 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:31.800 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.801 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.801 05:53:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.060 nvme0n1 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:32.060 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.061 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.319 nvme0n1 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.319 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe2048 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.578 05:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.578 nvme0n1 00:36:32.578 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.578 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.578 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.578 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.578 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.578 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.843 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.843 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.843 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.843 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.843 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 
00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:32.844 
05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.844 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.102 nvme0n1 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.102 05:53:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 
00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:33.102 
05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.102 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.360 nvme0n1 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.360 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.361 05:53:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.361 05:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.619 nvme0n1 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:33.619 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.878 nvme0n1 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.878 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.137 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.138 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.397 nvme0n1 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.397 05:53:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- 
# [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:34.397 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:34.398 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:34.398 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.398 05:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.655 nvme0n1 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.655 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.656 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.914 nvme0n1 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.914 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.172 05:53:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:35.172 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.173 05:53:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.173 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.432 nvme0n1 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 
00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.432 05:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.690 nvme0n1 00:36:35.690 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.690 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.690 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.690 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.690 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.690 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.947 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.948 
05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.948 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.206 nvme0n1 00:36:36.206 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.206 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.206 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.206 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:36.206 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.206 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.206 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.207 
05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.207 05:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.465 nvme0n1 00:36:36.465 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.465 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.465 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.465 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.465 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.723 05:53:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 
00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.723 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.981 nvme0n1 00:36:36.981 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.981 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.981 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.981 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.981 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 
-- # digest=sha512 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
192.168.100.8 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.239 05:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.497 nvme0n1 00:36:37.497 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.497 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.497 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.497 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.497 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:37.756 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.756 05:53:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.015 nvme0n1 00:36:38.015 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.015 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.015 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.015 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.015 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.274 05:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.532 nvme0n1 00:36:38.533 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.533 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.533 05:53:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.533 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.533 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.791 05:53:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:38.791 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.792 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.051 nvme0n1 00:36:39.051 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.051 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.051 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.051 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.051 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4MmNmNWFjNjRmZTNhZGVhNDc2NGQ1ZWVhYjY3YTm4WJe4: 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: ]] 00:36:39.310 05:53:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0YjRmNTU4ODgxOTRmYTE3YzgzZGJhMWYzM2E4N2NjMTNiNWZmZjFlYWMwODlhNDAzNzhkZGZkOTlkMDBjYylvhdg=: 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.310 05:53:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.310 05:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.877 nvme0n1 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==: 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe8192 1 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 
00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.877 05:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.813 nvme0n1 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9: 00:36:40.813 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]] 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.814 
05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.814 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.382 nvme0n1 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY3NjIwNGM2YjEwZWVlMjkxNjJlNmMwMzIwODA4ODM5Njc0YzljMmE2NWM5MGJhMYuAUw==: 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBjNGVmZDI4MWQxZjAyNGMxZjdiYzE1ZjVkMDIwYmPNK4VR: 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.382 05:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.950 nvme0n1 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:41.950 05:53:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5ZmQ2N2E5MzI2MzQ0ZmMwYTNhMGNkMjZkMTdiNzNhYzc4YzIwYmVkNzA3N2IzNDY3MGJmNzY0ZWJmMzg0N3l6ljs=: 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.950 05:53:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.950 05:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.892 nvme0n1
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==:
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==:
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==:
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==:
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.892 request:
00:36:42.892 {
00:36:42.892 "name": "nvme0",
00:36:42.892 "trtype": "rdma",
00:36:42.892 "traddr": "192.168.100.8",
00:36:42.892 "adrfam": "ipv4",
00:36:42.892 "trsvcid": "4420",
00:36:42.892 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:36:42.892 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:36:42.892 "prchk_reftag": false,
00:36:42.892 "prchk_guard": false,
00:36:42.892 "hdgst": false,
00:36:42.892 "ddgst": false,
00:36:42.892 "allow_unrecognized_csi": false,
00:36:42.892 "method": "bdev_nvme_attach_controller",
00:36:42.892 "req_id": 1
00:36:42.892 }
00:36:42.892 Got JSON-RPC error response
00:36:42.892 response:
00:36:42.892 {
00:36:42.892 "code": -5,
00:36:42.892 "message": "Input/output error"
00:36:42.892 }
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:36:42.892 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.893 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.152 request:
00:36:43.152 {
00:36:43.152 "name": "nvme0",
00:36:43.152 "trtype": "rdma",
00:36:43.152 "traddr": "192.168.100.8",
00:36:43.152 "adrfam": "ipv4",
00:36:43.152 "trsvcid": "4420",
00:36:43.152 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:36:43.152 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:36:43.152 "prchk_reftag": false,
00:36:43.152 "prchk_guard": false,
00:36:43.152 "hdgst": false,
00:36:43.152 "ddgst": false,
00:36:43.152 "dhchap_key": "key2",
00:36:43.152 "allow_unrecognized_csi": false,
00:36:43.152 "method": "bdev_nvme_attach_controller",
00:36:43.152 "req_id": 1
00:36:43.152 }
00:36:43.152 Got JSON-RPC error response
00:36:43.152 response:
00:36:43.152 {
00:36:43.152 "code": -5,
00:36:43.152 "message": "Input/output error"
00:36:43.152 }
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:43.152 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.153 request:
00:36:43.153 {
00:36:43.153 "name": "nvme0",
00:36:43.153 "trtype": "rdma",
00:36:43.153 "traddr": "192.168.100.8",
00:36:43.153 "adrfam": "ipv4",
00:36:43.153 "trsvcid": "4420",
00:36:43.153 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:36:43.153 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:36:43.153 "prchk_reftag": false,
00:36:43.153 "prchk_guard": false,
00:36:43.153 "hdgst": false,
00:36:43.153 "ddgst": false,
00:36:43.153 "dhchap_key": "key1",
00:36:43.153 "dhchap_ctrlr_key": "ckey2",
00:36:43.153 "allow_unrecognized_csi": false,
00:36:43.153 "method": "bdev_nvme_attach_controller",
00:36:43.153 "req_id": 1
00:36:43.153 }
00:36:43.153 Got JSON-RPC error response
00:36:43.153 response:
00:36:43.153 {
00:36:43.153 "code": -5,
00:36:43.153 "message": "Input/output error"
00:36:43.153 }
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.153 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.412 nvme0n1
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9:
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv:
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9:
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]]
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv:
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.412 05:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.672 request:
00:36:43.672 {
00:36:43.672 "name": "nvme0",
00:36:43.672 "dhchap_key": "key1",
00:36:43.672 "dhchap_ctrlr_key": "ckey2",
00:36:43.672 "method": "bdev_nvme_set_keys",
00:36:43.672 "req_id": 1
00:36:43.672 }
00:36:43.672 Got JSON-RPC error response
00:36:43.672 response:
00:36:43.672 {
00:36:43.672 "code": -13,
00:36:43.672 "message": "Permission denied"
00:36:43.672 }
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:36:43.672 05:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:36:44.608 05:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:36:44.609 05:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:36:44.609 05:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:44.609 05:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:44.609 05:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.609 05:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:36:44.609 05:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:36:45.984 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:36:45.984 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:36:45.984 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:45.984 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:45.984 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIwYzFkNWRhODhmMTIxZjkzNzJlM2QwMGNkYzY5OWVkNmZlMTEwNmVkNGUyMWE5Z+lzEQ==:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==: ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU3YzkwYWU0MDJjYTkzN2JmNTg0OWFkNTM2MTMwOTRhMGMyYzk0ZDlmNTFiNWRkIQOOAQ==:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:45.985 nvme0n1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQyMzBmZTNlMDAwYTIzNzdkYjMxMGZkMjI1NGVmYmTFq3s9:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv: ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJjMzU3YTRlNjU4OGQ3YjM4ZmNmMDQ2OWZjMzQ0ZWOPpAnv:
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:45.985 request:
00:36:45.985 {
00:36:45.985 "name": "nvme0",
00:36:45.985 "dhchap_key": "key2",
00:36:45.985 "dhchap_ctrlr_key": "ckey1",
00:36:45.985 "method": "bdev_nvme_set_keys",
00:36:45.985 "req_id": 1
00:36:45.985 }
00:36:45.985 Got JSON-RPC error response
00:36:45.985 response:
00:36:45.985 {
00:36:45.985 "code": -13,
00:36:45.985 "message": "Permission denied"
00:36:45.985 }
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:36:45.985 05:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:36:47.358 05:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:36:47.358 05:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:36:47.358 05:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.358 05:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.358 05:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.358 05:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:36:47.358 05:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:36:48.295 rmmod nvme_rdma
00:36:48.295 rmmod nvme_fabrics
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3572072 ']'
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3572072
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3572072 ']'
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3572072
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3572072
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3572072'
00:36:48.295 killing process with pid 3572072
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3572072
00:36:48.295 05:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3572072
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet
00:36:49.232 05:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:36:53.424 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:36:53.424 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:36:55.330 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:36:55.589 05:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.W9f /tmp/spdk.key-null.2BB /tmp/spdk.key-sha256.H8T /tmp/spdk.key-sha384.yaH /tmp/spdk.key-sha512.hh1
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:36:55.589 05:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:59.783 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:59.783 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:59.783 00:36:59.783 real 1m7.142s 00:36:59.783 user 0m58.617s 00:36:59.783 sys 0m18.684s 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.783 ************************************ 00:36:59.783 END TEST nvmf_auth_host 00:36:59.783 ************************************ 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 
00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.783 ************************************ 00:36:59.783 START TEST nvmf_bdevperf 00:36:59.783 ************************************ 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:59.783 * Looking for test storage... 
00:36:59.783 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:59.783 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:59.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.783 --rc genhtml_branch_coverage=1 00:36:59.783 --rc genhtml_function_coverage=1 00:36:59.783 --rc genhtml_legend=1 00:36:59.783 --rc geninfo_all_blocks=1 00:36:59.783 --rc geninfo_unexecuted_blocks=1 00:36:59.783 00:36:59.783 ' 00:36:59.783 05:53:55 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:59.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.783 --rc genhtml_branch_coverage=1 00:36:59.783 --rc genhtml_function_coverage=1 00:36:59.783 --rc genhtml_legend=1 00:36:59.783 --rc geninfo_all_blocks=1 00:36:59.783 --rc geninfo_unexecuted_blocks=1 00:36:59.783 00:36:59.783 ' 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:59.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.784 --rc genhtml_branch_coverage=1 00:36:59.784 --rc genhtml_function_coverage=1 00:36:59.784 --rc genhtml_legend=1 00:36:59.784 --rc geninfo_all_blocks=1 00:36:59.784 --rc geninfo_unexecuted_blocks=1 00:36:59.784 00:36:59.784 ' 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:59.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.784 --rc genhtml_branch_coverage=1 00:36:59.784 --rc genhtml_function_coverage=1 00:36:59.784 --rc genhtml_legend=1 00:36:59.784 --rc geninfo_all_blocks=1 00:36:59.784 --rc geninfo_unexecuted_blocks=1 00:36:59.784 00:36:59.784 ' 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.784 05:53:55 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:59.784 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:59.784 05:53:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:59.784 05:53:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:37:08.055 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:37:08.055 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:08.055 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.056 05:54:03 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:37:08.056 Found net devices under 0000:d9:00.0: mlx_0_0 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:37:08.056 Found net devices under 0000:d9:00.1: mlx_0_1 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:37:08.056 05:54:03 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:08.056 05:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ 
mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:37:08.056 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:37:08.056 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:37:08.056 altname enp217s0f0np0 00:37:08.056 altname ens818f0np0 00:37:08.056 inet 192.168.100.8/24 scope global mlx_0_0 00:37:08.056 valid_lft forever preferred_lft forever 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:37:08.056 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:08.056 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:37:08.056 altname enp217s0f1np1 00:37:08.056 altname ens818f1np1 00:37:08.056 inet 192.168.100.9/24 scope global mlx_0_1 00:37:08.056 valid_lft forever preferred_lft forever 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:08.056 05:54:04 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:37:08.056 192.168.100.9' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:37:08.056 192.168.100.9' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:37:08.056 192.168.100.9' 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@486 -- # tail -n +2 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:08.056 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3589086 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3589086 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3589086 ']' 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:08.057 05:54:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.057 [2024-11-27 05:54:04.261274] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:37:08.057 [2024-11-27 05:54:04.261372] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:08.057 [2024-11-27 05:54:04.416353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:08.057 [2024-11-27 05:54:04.521194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:08.057 [2024-11-27 05:54:04.521243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:08.057 [2024-11-27 05:54:04.521257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:08.057 [2024-11-27 05:54:04.521270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:08.057 [2024-11-27 05:54:04.521280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:08.057 [2024-11-27 05:54:04.523686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:08.057 [2024-11-27 05:54:04.523777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.057 [2024-11-27 05:54:04.523785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.625 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.625 [2024-11-27 05:54:05.147622] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fa729fbd940) succeed. 00:37:08.625 [2024-11-27 05:54:05.157078] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fa729f79940) succeed. 
00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.884 Malloc0 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.884 [2024-11-27 05:54:05.464371] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** 
NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:08.884 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.143 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:09.143 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:09.143 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:09.143 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:09.143 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:09.143 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:09.143 { 00:37:09.143 "params": { 00:37:09.143 "name": "Nvme$subsystem", 00:37:09.143 "trtype": "$TEST_TRANSPORT", 00:37:09.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.143 "adrfam": "ipv4", 00:37:09.143 "trsvcid": "$NVMF_PORT", 00:37:09.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.144 "hdgst": ${hdgst:-false}, 00:37:09.144 "ddgst": ${ddgst:-false} 00:37:09.144 }, 00:37:09.144 "method": "bdev_nvme_attach_controller" 00:37:09.144 } 00:37:09.144 EOF 00:37:09.144 )") 00:37:09.144 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:09.144 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:37:09.144 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:09.144 05:54:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:09.144 "params": { 00:37:09.144 "name": "Nvme1", 00:37:09.144 "trtype": "rdma", 00:37:09.144 "traddr": "192.168.100.8", 00:37:09.144 "adrfam": "ipv4", 00:37:09.144 "trsvcid": "4420", 00:37:09.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:09.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:09.144 "hdgst": false, 00:37:09.144 "ddgst": false 00:37:09.144 }, 00:37:09.144 "method": "bdev_nvme_attach_controller" 00:37:09.144 }' 00:37:09.144 [2024-11-27 05:54:05.553820] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:37:09.144 [2024-11-27 05:54:05.553914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589614 ] 00:37:09.144 [2024-11-27 05:54:05.709187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.401 [2024-11-27 05:54:05.812868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.659 Running I/O for 1 seconds... 
00:37:11.035 15744.00 IOPS, 61.50 MiB/s 00:37:11.036 Latency(us) 00:37:11.036 [2024-11-27T04:54:07.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.036 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:11.036 Verification LBA range: start 0x0 length 0x4000 00:37:11.036 Nvme1n1 : 1.01 15785.88 61.66 0.00 0.00 8060.79 204.80 18140.36 00:37:11.036 [2024-11-27T04:54:07.623Z] =================================================================================================================== 00:37:11.036 [2024-11-27T04:54:07.623Z] Total : 15785.88 61.66 0.00 0.00 8060.79 204.80 18140.36 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3589933 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:11.604 { 00:37:11.604 "params": { 00:37:11.604 "name": "Nvme$subsystem", 00:37:11.604 "trtype": "$TEST_TRANSPORT", 00:37:11.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.604 "adrfam": "ipv4", 00:37:11.604 "trsvcid": "$NVMF_PORT", 00:37:11.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.604 "hdgst": ${hdgst:-false}, 00:37:11.604 "ddgst": 
${ddgst:-false} 00:37:11.604 }, 00:37:11.604 "method": "bdev_nvme_attach_controller" 00:37:11.604 } 00:37:11.604 EOF 00:37:11.604 )") 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:11.604 05:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:11.604 "params": { 00:37:11.604 "name": "Nvme1", 00:37:11.604 "trtype": "rdma", 00:37:11.604 "traddr": "192.168.100.8", 00:37:11.604 "adrfam": "ipv4", 00:37:11.604 "trsvcid": "4420", 00:37:11.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:11.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:11.604 "hdgst": false, 00:37:11.604 "ddgst": false 00:37:11.604 }, 00:37:11.604 "method": "bdev_nvme_attach_controller" 00:37:11.604 }' 00:37:11.863 [2024-11-27 05:54:08.210499] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:37:11.863 [2024-11-27 05:54:08.210587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589933 ] 00:37:11.863 [2024-11-27 05:54:08.366215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.122 [2024-11-27 05:54:08.471539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.381 Running I/O for 15 seconds... 
00:37:14.694 15744.00 IOPS, 61.50 MiB/s [2024-11-27T04:54:11.281Z] 15845.50 IOPS, 61.90 MiB/s [2024-11-27T04:54:11.281Z] 05:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3589086 00:37:14.694 05:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:15.633 11904.00 IOPS, 46.50 MiB/s [2024-11-27T04:54:12.220Z] [2024-11-27 05:54:12.187310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 
05:54:12.187833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.633 [2024-11-27 05:54:12.187957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.633 [2024-11-27 05:54:12.187969] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.634 [2024-11-27 05:54:12.187981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.634 [2024-11-27 05:54:12.187992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.634 [2024-11-27 05:54:12.188005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.634 [2024-11-27 05:54:12.188016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.634 [2024-11-27 05:54:12.188029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.634 [2024-11-27 05:54:12.188040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.634 [2024-11-27 05:54:12.188052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.634 [2024-11-27 05:54:12.188063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.634 [2024-11-27 05:54:12.188076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.634 [2024-11-27 05:54:12.188088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.634 [2024-11-27 05:54:12.188100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 
00:37:15.634 [2024-11-27 05:54:12.188111 - 05:54:12.190550] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated for each queued command] WRITE sqid:1 nsid:1 lba:24112-24568 len:8 SGL DATA BLOCK and READ sqid:1 nsid:1 lba:23552-23872 len:8 SGL KEYED DATA BLOCK commands completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.636 [2024-11-27 05:54:12.192790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:15.636 [2024-11-27 05:54:12.192823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:15.636 [2024-11-27 05:54:12.192848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23880 len:8 PRP1 0x0 PRP2 0x0 00:37:15.636 [2024-11-27 05:54:12.192868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.636 [2024-11-27 05:54:12.197366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.894 [2024-11-27 05:54:12.226562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:15.894 [2024-11-27 05:54:12.230197] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:15.894 [2024-11-27 05:54:12.230223] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:15.894 [2024-11-27 05:54:12.230235] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:16.718 8928.00 IOPS, 34.88 MiB/s [2024-11-27T04:54:13.305Z] [2024-11-27 05:54:13.234665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:16.718 [2024-11-27 05:54:13.234741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.718 [2024-11-27 05:54:13.235169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.718 [2024-11-27 05:54:13.235185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.718 [2024-11-27 05:54:13.235199] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:16.718 [2024-11-27 05:54:13.235214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.718 [2024-11-27 05:54:13.239632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.718 [2024-11-27 05:54:13.242649] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:16.718 [2024-11-27 05:54:13.242676] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:16.718 [2024-11-27 05:54:13.242688] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:17.652 7142.40 IOPS, 27.90 MiB/s [2024-11-27T04:54:14.239Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3589086 Killed "${NVMF_APP[@]}" "$@" 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3590997 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3590997 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3590997 ']' 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:17.652 05:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:17.652 [2024-11-27 05:54:14.235891] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:37:17.652 [2024-11-27 05:54:14.235987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.910 [2024-11-27 05:54:14.246985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:17.910 [2024-11-27 05:54:14.247021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:37:17.911 [2024-11-27 05:54:14.247225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.911 [2024-11-27 05:54:14.247241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.911 [2024-11-27 05:54:14.247255] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:17.911 [2024-11-27 05:54:14.247273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.911 [2024-11-27 05:54:14.254561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.911 [2024-11-27 05:54:14.257763] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:17.911 [2024-11-27 05:54:14.257792] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:17.911 [2024-11-27 05:54:14.257805] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:17.911 [2024-11-27 05:54:14.400754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:18.169 [2024-11-27 05:54:14.501898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.169 [2024-11-27 05:54:14.501944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.169 [2024-11-27 05:54:14.501957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.169 [2024-11-27 05:54:14.501972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:18.169 [2024-11-27 05:54:14.501982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:18.169 [2024-11-27 05:54:14.504211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:18.169 [2024-11-27 05:54:14.504273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.169 [2024-11-27 05:54:14.504282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:18.685 5952.00 IOPS, 23.25 MiB/s [2024-11-27T04:54:15.272Z] 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.685 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.685 [2024-11-27 05:54:15.125247] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fad9159a940) succeed. 00:37:18.685 [2024-11-27 05:54:15.134710] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fad91556940) succeed. 
00:37:18.685 [2024-11-27 05:54:15.261864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:18.685 [2024-11-27 05:54:15.261914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.685 [2024-11-27 05:54:15.262118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.685 [2024-11-27 05:54:15.262134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.685 [2024-11-27 05:54:15.262149] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:18.685 [2024-11-27 05:54:15.262166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:18.943 [2024-11-27 05:54:15.270754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.943 [2024-11-27 05:54:15.274158] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:18.943 [2024-11-27 05:54:15.274189] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:18.943 [2024-11-27 05:54:15.274201] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:37:18.943 Malloc0 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.943 [2024-11-27 05:54:15.428022] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.943 05:54:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3589933 00:37:19.768 5101.71 IOPS, 19.93 MiB/s [2024-11-27T04:54:16.355Z] [2024-11-27 05:54:16.278473] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:19.768 [2024-11-27 05:54:16.278511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.768 [2024-11-27 05:54:16.278717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.768 [2024-11-27 05:54:16.278734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.768 [2024-11-27 05:54:16.278748] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:19.768 [2024-11-27 05:54:16.278765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:19.768 [2024-11-27 05:54:16.286104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.768 [2024-11-27 05:54:16.329260] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:37:21.708 5570.00 IOPS, 21.76 MiB/s [2024-11-27T04:54:19.230Z] 6698.22 IOPS, 26.16 MiB/s [2024-11-27T04:54:20.163Z] 7599.10 IOPS, 29.68 MiB/s [2024-11-27T04:54:21.097Z] 8335.27 IOPS, 32.56 MiB/s [2024-11-27T04:54:22.031Z] 8943.33 IOPS, 34.93 MiB/s [2024-11-27T04:54:22.966Z] 9462.62 IOPS, 36.96 MiB/s [2024-11-27T04:54:24.340Z] 9909.07 IOPS, 38.71 MiB/s [2024-11-27T04:54:24.340Z] 10296.07 IOPS, 40.22 MiB/s 00:37:27.753 Latency(us) 00:37:27.753 [2024-11-27T04:54:24.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:27.753 Verification LBA range: start 0x0 length 0x4000 00:37:27.753 Nvme1n1 : 15.01 10298.59 40.23 12589.29 0.00 5570.21 724.17 1067030.94 00:37:27.753 [2024-11-27T04:54:24.340Z] =================================================================================================================== 00:37:27.753 [2024-11-27T04:54:24.340Z] Total : 10298.59 40.23 12589.29 0.00 5570.21 724.17 1067030.94 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:28.319 05:54:24 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:28.319 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:37:28.319 rmmod nvme_rdma 00:37:28.577 rmmod nvme_fabrics 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3590997 ']' 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3590997 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3590997 ']' 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3590997 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:28.577 05:54:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3590997 00:37:28.577 05:54:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:28.577 05:54:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:28.577 05:54:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3590997' 00:37:28.577 killing 
process with pid 3590997 00:37:28.578 05:54:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3590997 00:37:28.578 05:54:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3590997 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:37:30.478 00:37:30.478 real 0m30.898s 00:37:30.478 user 1m16.493s 00:37:30.478 sys 0m8.230s 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 ************************************ 00:37:30.478 END TEST nvmf_bdevperf 00:37:30.478 ************************************ 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 ************************************ 00:37:30.478 START TEST nvmf_target_disconnect 00:37:30.478 ************************************ 00:37:30.478 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:37:30.478 * Looking for test storage... 
00:37:30.478 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:30.479 
05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:30.479 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:37:30.479 --rc genhtml_branch_coverage=1 00:37:30.479 --rc genhtml_function_coverage=1 00:37:30.479 --rc genhtml_legend=1 00:37:30.479 --rc geninfo_all_blocks=1 00:37:30.479 --rc geninfo_unexecuted_blocks=1 00:37:30.479 00:37:30.479 ' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:30.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.479 --rc genhtml_branch_coverage=1 00:37:30.479 --rc genhtml_function_coverage=1 00:37:30.479 --rc genhtml_legend=1 00:37:30.479 --rc geninfo_all_blocks=1 00:37:30.479 --rc geninfo_unexecuted_blocks=1 00:37:30.479 00:37:30.479 ' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:30.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.479 --rc genhtml_branch_coverage=1 00:37:30.479 --rc genhtml_function_coverage=1 00:37:30.479 --rc genhtml_legend=1 00:37:30.479 --rc geninfo_all_blocks=1 00:37:30.479 --rc geninfo_unexecuted_blocks=1 00:37:30.479 00:37:30.479 ' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:30.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.479 --rc genhtml_branch_coverage=1 00:37:30.479 --rc genhtml_function_coverage=1 00:37:30.479 --rc genhtml_legend=1 00:37:30.479 --rc geninfo_all_blocks=1 00:37:30.479 --rc geninfo_unexecuted_blocks=1 00:37:30.479 00:37:30.479 ' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:30.479 05:54:26 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.479 05:54:26 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:30.479 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.479 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:30.480 05:54:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:38.586 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local 
-ga x722 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:38.587 05:54:34 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:37:38.587 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:37:38.587 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:37:38.587 
05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:37:38.587 Found net devices under 0000:d9:00.0: mlx_0_0 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.587 05:54:34 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:37:38.587 Found net devices under 0000:d9:00.1: mlx_0_1 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:37:38.587 
05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:38.587 
05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:38.587 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:37:38.588 6: mlx_0_0: mtu 1500 qdisc mq state 
DOWN group default qlen 1000 00:37:38.588 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:37:38.588 altname enp217s0f0np0 00:37:38.588 altname ens818f0np0 00:37:38.588 inet 192.168.100.8/24 scope global mlx_0_0 00:37:38.588 valid_lft forever preferred_lft forever 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:37:38.588 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:38.588 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:37:38.588 altname enp217s0f1np1 00:37:38.588 altname ens818f1np1 00:37:38.588 inet 192.168.100.9/24 scope global mlx_0_1 00:37:38.588 valid_lft forever preferred_lft forever 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:38.588 05:54:34 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:38.588 05:54:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:37:38.588 192.168.100.9' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo 
'192.168.100.8 00:37:38.588 192.168.100.9' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:37:38.588 192.168.100.9' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:38.588 ************************************ 00:37:38.588 START TEST nvmf_target_disconnect_tc1 00:37:38.588 
************************************ 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:38.588 05:54:35 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:37:38.588 05:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:38.846 [2024-11-27 05:54:35.332059] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:38.846 [2024-11-27 05:54:35.332145] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:38.846 [2024-11-27 05:54:35.332163] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0 00:37:39.781 [2024-11-27 05:54:36.336325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:37:39.781 [2024-11-27 05:54:36.336372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:37:39.781 [2024-11-27 05:54:36.336392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:37:39.781 [2024-11-27 05:54:36.336455] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:39.781 [2024-11-27 05:54:36.336473] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:39.781 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:37:39.781 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:40.039 Initializing NVMe Controllers 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:40.039 00:37:40.039 real 0m1.331s 00:37:40.039 user 0m0.914s 00:37:40.039 sys 0m0.403s 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:40.039 ************************************ 00:37:40.039 END TEST nvmf_target_disconnect_tc1 00:37:40.039 ************************************ 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:40.039 05:54:36 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:40.039 ************************************ 00:37:40.039 START TEST nvmf_target_disconnect_tc2 00:37:40.039 ************************************ 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3597311 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3597311 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3597311 ']' 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:40.039 05:54:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.298 [2024-11-27 05:54:36.627411] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:37:40.298 [2024-11-27 05:54:36.627504] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:40.298 [2024-11-27 05:54:36.793586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:40.556 [2024-11-27 05:54:36.891252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:40.556 [2024-11-27 05:54:36.891297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:40.556 [2024-11-27 05:54:36.891309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:40.556 [2024-11-27 05:54:36.891321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:40.556 [2024-11-27 05:54:36.891330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:40.556 [2024-11-27 05:54:36.893957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:40.556 [2024-11-27 05:54:36.894086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:40.556 [2024-11-27 05:54:36.894154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:40.556 [2024-11-27 05:54:36.894179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.123 Malloc0 00:37:41.123 05:54:37 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.123 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.123 [2024-11-27 05:54:37.577086] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7fd747348940) succeed. 00:37:41.123 [2024-11-27 05:54:37.586881] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7fd74651a940) succeed. 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.382 [2024-11-27 05:54:37.870032] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3597480 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # 
sleep 2 00:37:41.382 05:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:43.914 05:54:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3597311 00:37:43.914 05:54:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read 
completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Write completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 Read completed with error (sct=0, sc=8) 00:37:44.848 starting I/O failed 00:37:44.848 [2024-11-27 05:54:41.185987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.415 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3597311 Killed "${NVMF_APP[@]}" "$@" 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3598166 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3598166 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3598166 ']' 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:45.415 05:54:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.415 [2024-11-27 05:54:41.988859] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:37:45.415 [2024-11-27 05:54:41.988960] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.673 [2024-11-27 05:54:42.167723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 
00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Read completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.673 Write completed with error (sct=0, sc=8) 00:37:45.673 starting I/O failed 00:37:45.674 Write completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Write completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Write completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 Read completed with error (sct=0, sc=8) 00:37:45.674 starting I/O failed 00:37:45.674 [2024-11-27 05:54:42.191640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:45.932 [2024-11-27 05:54:42.270795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:45.932 [2024-11-27 05:54:42.270839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.932 [2024-11-27 05:54:42.270852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.932 [2024-11-27 05:54:42.270864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.932 [2024-11-27 05:54:42.270874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:45.932 [2024-11-27 05:54:42.273645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:45.932 [2024-11-27 05:54:42.273720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:45.932 [2024-11-27 05:54:42.273814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:45.932 [2024-11-27 05:54:42.273839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.498 Malloc0 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.498 05:54:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.498 [2024-11-27 05:54:42.964004] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7f7fe2303940) succeed. 00:37:46.498 [2024-11-27 05:54:42.973969] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7f7fe2148940) succeed. 
00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 
00:37:46.756 Write completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.756 starting I/O failed 00:37:46.756 Read completed with error (sct=0, sc=8) 00:37:46.757 starting I/O failed 00:37:46.757 Read completed with error (sct=0, sc=8) 00:37:46.757 starting I/O failed 00:37:46.757 Read completed with error (sct=0, sc=8) 00:37:46.757 starting I/O failed 00:37:46.757 [2024-11-27 05:54:43.197348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:46.757 05:54:43 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.757 [2024-11-27 05:54:43.255288] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.757 05:54:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3597480 00:37:47.698 Write completed 
with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Read completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.698 Write completed with error (sct=0, sc=8) 00:37:47.698 starting I/O failed 00:37:47.699 Write completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Write completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Write completed with 
error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Write completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Read completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Read completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Write completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Read completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Read completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Read completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 Write completed with error (sct=0, sc=8) 00:37:47.699 starting I/O failed 00:37:47.699 [2024-11-27 05:54:44.202830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:47.699 [2024-11-27 05:54:44.216669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.699 [2024-11-27 05:54:44.216768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.699 [2024-11-27 05:54:44.216805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.699 [2024-11-27 05:54:44.216822] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.699 [2024-11-27 05:54:44.216837] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:47.699 [2024-11-27 05:54:44.226659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:47.699 qpair failed and we were unable to recover it. 
00:37:47.699 [2024-11-27 05:54:44.236160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.699 [2024-11-27 05:54:44.236236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.699 [2024-11-27 05:54:44.236263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.699 [2024-11-27 05:54:44.236280] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.699 [2024-11-27 05:54:44.236293] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:47.699 [2024-11-27 05:54:44.246290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:47.699 qpair failed and we were unable to recover it. 
00:37:47.699 [2024-11-27 05:54:44.256272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.699 [2024-11-27 05:54:44.256346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.699 [2024-11-27 05:54:44.256375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.699 [2024-11-27 05:54:44.256390] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.699 [2024-11-27 05:54:44.256404] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:47.699 [2024-11-27 05:54:44.266642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:47.699 qpair failed and we were unable to recover it. 
00:37:47.699 [2024-11-27 05:54:44.276219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.699 [2024-11-27 05:54:44.276290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.699 [2024-11-27 05:54:44.276315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.699 [2024-11-27 05:54:44.276331] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.699 [2024-11-27 05:54:44.276343] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:47.958 [2024-11-27 05:54:44.286639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:47.958 qpair failed and we were unable to recover it. 
00:37:47.958 [2024-11-27 05:54:44.296356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.296429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.296457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.296471] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.296485] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.306623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.316366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.316435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.316459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.316477] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.316489] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.326699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.336424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.336489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.336519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.336533] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.336547] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.346878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.356551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.356630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.356654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.356670] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.356682] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.366833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.376595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.376657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.376688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.376702] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.376721] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.386682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.396641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.396703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.396727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.396744] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.396756] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.406837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.416713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.416786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.416813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.416827] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.416841] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.426974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.436781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.436849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.436873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.436890] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.958 [2024-11-27 05:54:44.436902] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.958 [2024-11-27 05:54:44.446916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.958 qpair failed and we were unable to recover it.
00:37:47.958 [2024-11-27 05:54:44.456771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.958 [2024-11-27 05:54:44.456836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.958 [2024-11-27 05:54:44.456864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.958 [2024-11-27 05:54:44.456878] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.959 [2024-11-27 05:54:44.456896] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.959 [2024-11-27 05:54:44.466980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.959 qpair failed and we were unable to recover it.
00:37:47.959 [2024-11-27 05:54:44.476828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.959 [2024-11-27 05:54:44.476892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.959 [2024-11-27 05:54:44.476917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.959 [2024-11-27 05:54:44.476933] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.959 [2024-11-27 05:54:44.476945] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.959 [2024-11-27 05:54:44.487001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.959 qpair failed and we were unable to recover it.
00:37:47.959 [2024-11-27 05:54:44.496921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.959 [2024-11-27 05:54:44.496983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.959 [2024-11-27 05:54:44.497009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.959 [2024-11-27 05:54:44.497023] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.959 [2024-11-27 05:54:44.497036] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.959 [2024-11-27 05:54:44.507355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.959 qpair failed and we were unable to recover it.
00:37:47.959 [2024-11-27 05:54:44.517033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.959 [2024-11-27 05:54:44.517099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.959 [2024-11-27 05:54:44.517124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.959 [2024-11-27 05:54:44.517143] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.959 [2024-11-27 05:54:44.517155] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:47.959 [2024-11-27 05:54:44.527338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:47.959 qpair failed and we were unable to recover it.
00:37:47.959 [2024-11-27 05:54:44.537029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.959 [2024-11-27 05:54:44.537094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.959 [2024-11-27 05:54:44.537121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.959 [2024-11-27 05:54:44.537135] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.959 [2024-11-27 05:54:44.537149] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.547254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.557137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.218 [2024-11-27 05:54:44.557206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.218 [2024-11-27 05:54:44.557231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.218 [2024-11-27 05:54:44.557247] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.218 [2024-11-27 05:54:44.557259] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.567159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.577225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.218 [2024-11-27 05:54:44.577289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.218 [2024-11-27 05:54:44.577321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.218 [2024-11-27 05:54:44.577335] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.218 [2024-11-27 05:54:44.577349] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.587424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.597145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.218 [2024-11-27 05:54:44.597214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.218 [2024-11-27 05:54:44.597239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.218 [2024-11-27 05:54:44.597255] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.218 [2024-11-27 05:54:44.597267] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.607519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.617283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.218 [2024-11-27 05:54:44.617347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.218 [2024-11-27 05:54:44.617373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.218 [2024-11-27 05:54:44.617388] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.218 [2024-11-27 05:54:44.617401] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.627605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.637376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.218 [2024-11-27 05:54:44.637440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.218 [2024-11-27 05:54:44.637465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.218 [2024-11-27 05:54:44.637481] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.218 [2024-11-27 05:54:44.637492] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.647612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.657412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.218 [2024-11-27 05:54:44.657497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.218 [2024-11-27 05:54:44.657527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.218 [2024-11-27 05:54:44.657541] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.218 [2024-11-27 05:54:44.657554] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.667746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.677512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.218 [2024-11-27 05:54:44.677580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.218 [2024-11-27 05:54:44.677605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.218 [2024-11-27 05:54:44.677628] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.218 [2024-11-27 05:54:44.677639] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.218 [2024-11-27 05:54:44.687719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.218 qpair failed and we were unable to recover it.
00:37:48.218 [2024-11-27 05:54:44.697554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.219 [2024-11-27 05:54:44.697619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.219 [2024-11-27 05:54:44.697646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.219 [2024-11-27 05:54:44.697661] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.219 [2024-11-27 05:54:44.697678] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.219 [2024-11-27 05:54:44.708009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.219 qpair failed and we were unable to recover it.
00:37:48.219 [2024-11-27 05:54:44.717562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.219 [2024-11-27 05:54:44.717635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.219 [2024-11-27 05:54:44.717660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.219 [2024-11-27 05:54:44.717679] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.219 [2024-11-27 05:54:44.717692] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.219 [2024-11-27 05:54:44.727955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.219 qpair failed and we were unable to recover it.
00:37:48.219 [2024-11-27 05:54:44.737646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.219 [2024-11-27 05:54:44.737710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.219 [2024-11-27 05:54:44.737738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.219 [2024-11-27 05:54:44.737753] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.219 [2024-11-27 05:54:44.737767] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.219 [2024-11-27 05:54:44.747848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.219 qpair failed and we were unable to recover it.
00:37:48.219 [2024-11-27 05:54:44.757613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.219 [2024-11-27 05:54:44.757682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.219 [2024-11-27 05:54:44.757706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.219 [2024-11-27 05:54:44.757722] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.219 [2024-11-27 05:54:44.757734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.219 [2024-11-27 05:54:44.768057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.219 qpair failed and we were unable to recover it.
00:37:48.219 [2024-11-27 05:54:44.777745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.219 [2024-11-27 05:54:44.777808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.219 [2024-11-27 05:54:44.777835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.219 [2024-11-27 05:54:44.777849] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.219 [2024-11-27 05:54:44.777864] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.219 [2024-11-27 05:54:44.788052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.219 qpair failed and we were unable to recover it.
00:37:48.219 [2024-11-27 05:54:44.797897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.219 [2024-11-27 05:54:44.797967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.219 [2024-11-27 05:54:44.797992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.219 [2024-11-27 05:54:44.798007] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.219 [2024-11-27 05:54:44.798022] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.478 [2024-11-27 05:54:44.808225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.478 qpair failed and we were unable to recover it.
00:37:48.478 [2024-11-27 05:54:44.817943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.478 [2024-11-27 05:54:44.818002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.478 [2024-11-27 05:54:44.818029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.478 [2024-11-27 05:54:44.818044] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.478 [2024-11-27 05:54:44.818058] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.478 [2024-11-27 05:54:44.828253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.478 qpair failed and we were unable to recover it.
00:37:48.478 [2024-11-27 05:54:44.837937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.478 [2024-11-27 05:54:44.838002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.478 [2024-11-27 05:54:44.838026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.478 [2024-11-27 05:54:44.838045] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.478 [2024-11-27 05:54:44.838057] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.478 [2024-11-27 05:54:44.848109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.478 qpair failed and we were unable to recover it.
00:37:48.478 [2024-11-27 05:54:44.858158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.478 [2024-11-27 05:54:44.858224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.478 [2024-11-27 05:54:44.858254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.478 [2024-11-27 05:54:44.858268] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.478 [2024-11-27 05:54:44.858282] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.478 [2024-11-27 05:54:44.868412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.478 qpair failed and we were unable to recover it.
00:37:48.478 [2024-11-27 05:54:44.878072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.478 [2024-11-27 05:54:44.878137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.478 [2024-11-27 05:54:44.878161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.478 [2024-11-27 05:54:44.878178] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.478 [2024-11-27 05:54:44.878190] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.478 [2024-11-27 05:54:44.888425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.478 qpair failed and we were unable to recover it.
00:37:48.478 [2024-11-27 05:54:44.898172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.478 [2024-11-27 05:54:44.898240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.478 [2024-11-27 05:54:44.898268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.478 [2024-11-27 05:54:44.898282] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.478 [2024-11-27 05:54:44.898295] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.478 [2024-11-27 05:54:44.908456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.478 qpair failed and we were unable to recover it.
00:37:48.478 [2024-11-27 05:54:44.918188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.478 [2024-11-27 05:54:44.918254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.478 [2024-11-27 05:54:44.918278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.478 [2024-11-27 05:54:44.918296] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.478 [2024-11-27 05:54:44.918308] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.478 [2024-11-27 05:54:44.928645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.478 qpair failed and we were unable to recover it.
00:37:48.478 [2024-11-27 05:54:44.938271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.478 [2024-11-27 05:54:44.938339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.479 [2024-11-27 05:54:44.938367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.479 [2024-11-27 05:54:44.938381] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.479 [2024-11-27 05:54:44.938396] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.479 [2024-11-27 05:54:44.948461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.479 qpair failed and we were unable to recover it.
00:37:48.479 [2024-11-27 05:54:44.958308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.479 [2024-11-27 05:54:44.958369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.479 [2024-11-27 05:54:44.958393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.479 [2024-11-27 05:54:44.958410] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.479 [2024-11-27 05:54:44.958422] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.479 [2024-11-27 05:54:44.968522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.479 qpair failed and we were unable to recover it.
00:37:48.479 [2024-11-27 05:54:44.978422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.479 [2024-11-27 05:54:44.978484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.479 [2024-11-27 05:54:44.978517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.479 [2024-11-27 05:54:44.978531] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.479 [2024-11-27 05:54:44.978545] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.479 [2024-11-27 05:54:44.988805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.479 qpair failed and we were unable to recover it.
00:37:48.479 [2024-11-27 05:54:44.998540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.479 [2024-11-27 05:54:44.998603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.479 [2024-11-27 05:54:44.998634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.479 [2024-11-27 05:54:44.998650] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.479 [2024-11-27 05:54:44.998662] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.479 [2024-11-27 05:54:45.008626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.479 qpair failed and we were unable to recover it. 
00:37:48.479 [2024-11-27 05:54:45.018631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.479 [2024-11-27 05:54:45.018695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.479 [2024-11-27 05:54:45.018721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.479 [2024-11-27 05:54:45.018735] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.479 [2024-11-27 05:54:45.018751] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.479 [2024-11-27 05:54:45.028748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.479 qpair failed and we were unable to recover it. 
00:37:48.479 [2024-11-27 05:54:45.038588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.479 [2024-11-27 05:54:45.038653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.479 [2024-11-27 05:54:45.038677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.479 [2024-11-27 05:54:45.038693] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.479 [2024-11-27 05:54:45.038705] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.479 [2024-11-27 05:54:45.048722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.479 qpair failed and we were unable to recover it. 
00:37:48.479 [2024-11-27 05:54:45.058629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.479 [2024-11-27 05:54:45.058696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.479 [2024-11-27 05:54:45.058723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.479 [2024-11-27 05:54:45.058740] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.479 [2024-11-27 05:54:45.058754] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.069046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.078692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.078758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.078783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.078799] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.078811] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.088921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.098762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.098826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.098853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.098867] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.098882] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.112019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.118757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.118826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.118851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.118867] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.118879] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.129113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.138771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.138834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.138861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.138876] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.138889] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.149135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.158961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.159023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.159048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.159067] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.159079] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.169265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.179049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.179114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.179141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.179156] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.179170] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.189351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.199093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.199158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.199182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.199199] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.199210] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.209366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.219001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.219067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.219094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.219109] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.219123] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.229430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.738 [2024-11-27 05:54:45.239228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.738 [2024-11-27 05:54:45.239296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.738 [2024-11-27 05:54:45.239321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.738 [2024-11-27 05:54:45.239338] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.738 [2024-11-27 05:54:45.239350] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.738 [2024-11-27 05:54:45.249641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.738 qpair failed and we were unable to recover it. 
00:37:48.739 [2024-11-27 05:54:45.260586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.739 [2024-11-27 05:54:45.260656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.739 [2024-11-27 05:54:45.260684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.739 [2024-11-27 05:54:45.260698] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.739 [2024-11-27 05:54:45.260712] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.739 [2024-11-27 05:54:45.269546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.739 qpair failed and we were unable to recover it. 
00:37:48.739 [2024-11-27 05:54:45.279388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.739 [2024-11-27 05:54:45.279456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.739 [2024-11-27 05:54:45.279481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.739 [2024-11-27 05:54:45.279498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.739 [2024-11-27 05:54:45.279510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.739 [2024-11-27 05:54:45.289634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.739 qpair failed and we were unable to recover it. 
00:37:48.739 [2024-11-27 05:54:45.299362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.739 [2024-11-27 05:54:45.299424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.739 [2024-11-27 05:54:45.299455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.739 [2024-11-27 05:54:45.299469] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.739 [2024-11-27 05:54:45.299483] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.739 [2024-11-27 05:54:45.309800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.739 qpair failed and we were unable to recover it. 
00:37:48.739 [2024-11-27 05:54:45.319517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.739 [2024-11-27 05:54:45.319580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.739 [2024-11-27 05:54:45.319614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.739 [2024-11-27 05:54:45.319631] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.739 [2024-11-27 05:54:45.319643] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.329889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.339581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.339647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.339674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.339688] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.339704] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.349842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.359698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.359759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.359783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.359800] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.359812] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.369884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.379883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.379945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.379972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.379986] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.380000] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.390016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.399786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.399853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.399878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.399897] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.399909] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.409937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.419892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.419947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.419975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.419989] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.420003] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.430048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.440004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.440071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.440095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.440111] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.440123] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.450057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.459870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.459932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.459960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.459975] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.459989] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.470246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.480003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.480064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.480089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.480108] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.480120] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.490224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.500037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.998 [2024-11-27 05:54:45.500097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.998 [2024-11-27 05:54:45.500122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.998 [2024-11-27 05:54:45.500136] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.998 [2024-11-27 05:54:45.500148] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.998 [2024-11-27 05:54:45.510190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.998 qpair failed and we were unable to recover it. 
00:37:48.998 [2024-11-27 05:54:45.520121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.999 [2024-11-27 05:54:45.520182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.999 [2024-11-27 05:54:45.520206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.999 [2024-11-27 05:54:45.520221] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.999 [2024-11-27 05:54:45.520232] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:48.999 [2024-11-27 05:54:45.530398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.999 qpair failed and we were unable to recover it. 
00:37:48.999 [2024-11-27 05:54:45.540175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.999 [2024-11-27 05:54:45.540240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.999 [2024-11-27 05:54:45.540264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.999 [2024-11-27 05:54:45.540278] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.999 [2024-11-27 05:54:45.540290] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.999 [2024-11-27 05:54:45.550503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.999 qpair failed and we were unable to recover it.
00:37:48.999 [2024-11-27 05:54:45.561449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.999 [2024-11-27 05:54:45.561509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.999 [2024-11-27 05:54:45.561533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.999 [2024-11-27 05:54:45.561548] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.999 [2024-11-27 05:54:45.561559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:48.999 [2024-11-27 05:54:45.570330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:48.999 qpair failed and we were unable to recover it.
00:37:48.999 [2024-11-27 05:54:45.580348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:48.999 [2024-11-27 05:54:45.580411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:48.999 [2024-11-27 05:54:45.580436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:48.999 [2024-11-27 05:54:45.580450] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:48.999 [2024-11-27 05:54:45.580462] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.257 [2024-11-27 05:54:45.590418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.600389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.600451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.600476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.600490] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.600501] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.610512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.620312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.620374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.620399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.620414] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.620425] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.630618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.640494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.640550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.640574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.640588] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.640600] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.650743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.660438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.660494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.660522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.660537] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.660549] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.670819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.680572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.680632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.680657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.680671] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.680682] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.690814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.700581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.700645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.700670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.700685] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.700697] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.710924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.720567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.720627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.720652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.720666] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.720677] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.730775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.740681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.740740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.740765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.740779] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.740794] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.751432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.760763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.760823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.760846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.760860] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.760871] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.771043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.780885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.780938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.780962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.780976] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.780988] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.791200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.800877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.800939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.800964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.800978] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.800990] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.811182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.820967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.821023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.821047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.258 [2024-11-27 05:54:45.821061] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.258 [2024-11-27 05:54:45.821073] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.258 [2024-11-27 05:54:45.831254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.258 qpair failed and we were unable to recover it.
00:37:49.258 [2024-11-27 05:54:45.841032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.258 [2024-11-27 05:54:45.841100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.258 [2024-11-27 05:54:45.841126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.259 [2024-11-27 05:54:45.841140] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.259 [2024-11-27 05:54:45.841152] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.851281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:45.861122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:45.861182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:45.861208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:45.861223] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:45.861235] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.871414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:45.881145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:45.881205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:45.881229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:45.881244] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:45.881255] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.891435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:45.901174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:45.901230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:45.901254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:45.901268] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:45.901280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.911643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:45.921374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:45.921433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:45.921458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:45.921471] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:45.921483] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.931558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:45.941344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:45.941408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:45.941432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:45.941446] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:45.941457] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.951530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:45.961392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:45.961448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:45.961473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:45.961486] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:45.961498] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.971745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:45.981381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:45.981438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:45.981462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:45.981476] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:45.981487] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:45.991682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:46.001542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:46.001617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:46.001645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:46.001660] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:46.001671] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:46.011864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:46.021540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:46.021599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:46.021630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:46.021645] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.517 [2024-11-27 05:54:46.021656] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.517 [2024-11-27 05:54:46.031950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.517 qpair failed and we were unable to recover it.
00:37:49.517 [2024-11-27 05:54:46.041600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.517 [2024-11-27 05:54:46.041662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.517 [2024-11-27 05:54:46.041685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.517 [2024-11-27 05:54:46.041699] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.518 [2024-11-27 05:54:46.041710] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.518 [2024-11-27 05:54:46.051840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.518 qpair failed and we were unable to recover it.
00:37:49.518 [2024-11-27 05:54:46.061723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.518 [2024-11-27 05:54:46.061779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.518 [2024-11-27 05:54:46.061804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.518 [2024-11-27 05:54:46.061818] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.518 [2024-11-27 05:54:46.061830] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.518 [2024-11-27 05:54:46.072065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.518 qpair failed and we were unable to recover it.
00:37:49.518 [2024-11-27 05:54:46.081802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.518 [2024-11-27 05:54:46.081857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.518 [2024-11-27 05:54:46.081881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.518 [2024-11-27 05:54:46.081895] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.518 [2024-11-27 05:54:46.081913] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.518 [2024-11-27 05:54:46.091795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.518 qpair failed and we were unable to recover it.
00:37:49.518 [2024-11-27 05:54:46.101830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.776 [2024-11-27 05:54:46.101892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.776 [2024-11-27 05:54:46.101916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.776 [2024-11-27 05:54:46.101931] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.776 [2024-11-27 05:54:46.101942] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.776 [2024-11-27 05:54:46.112030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.776 qpair failed and we were unable to recover it.
00:37:49.776 [2024-11-27 05:54:46.121816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.776 [2024-11-27 05:54:46.121875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.776 [2024-11-27 05:54:46.121900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.776 [2024-11-27 05:54:46.121914] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.776 [2024-11-27 05:54:46.121926] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.776 [2024-11-27 05:54:46.132221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.776 qpair failed and we were unable to recover it.
00:37:49.776 [2024-11-27 05:54:46.141847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.776 [2024-11-27 05:54:46.141906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.776 [2024-11-27 05:54:46.141930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.776 [2024-11-27 05:54:46.141944] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.776 [2024-11-27 05:54:46.141956] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.776 [2024-11-27 05:54:46.152032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.776 qpair failed and we were unable to recover it.
00:37:49.776 [2024-11-27 05:54:46.161949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.776 [2024-11-27 05:54:46.162014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.776 [2024-11-27 05:54:46.162038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.776 [2024-11-27 05:54:46.162053] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.776 [2024-11-27 05:54:46.162065] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.776 [2024-11-27 05:54:46.172217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.776 qpair failed and we were unable to recover it.
00:37:49.776 [2024-11-27 05:54:46.181999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.776 [2024-11-27 05:54:46.182061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.777 [2024-11-27 05:54:46.182085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.777 [2024-11-27 05:54:46.182099] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.777 [2024-11-27 05:54:46.182111] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.777 [2024-11-27 05:54:46.192353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.777 qpair failed and we were unable to recover it.
00:37:49.777 [2024-11-27 05:54:46.202154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.777 [2024-11-27 05:54:46.202216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.777 [2024-11-27 05:54:46.202240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.777 [2024-11-27 05:54:46.202254] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.777 [2024-11-27 05:54:46.202266] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.777 [2024-11-27 05:54:46.212447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.777 qpair failed and we were unable to recover it.
00:37:49.777 [2024-11-27 05:54:46.222189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:49.777 [2024-11-27 05:54:46.222248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:49.777 [2024-11-27 05:54:46.222273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:49.777 [2024-11-27 05:54:46.222287] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:49.777 [2024-11-27 05:54:46.222298] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:49.777 [2024-11-27 05:54:46.232578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:49.777 qpair failed and we were unable to recover it.
00:37:49.777 [2024-11-27 05:54:46.242245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:49.777 [2024-11-27 05:54:46.242304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:49.777 [2024-11-27 05:54:46.242329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:49.777 [2024-11-27 05:54:46.242342] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:49.777 [2024-11-27 05:54:46.242354] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:49.777 [2024-11-27 05:54:46.252548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:49.777 qpair failed and we were unable to recover it. 
00:37:49.777 [2024-11-27 05:54:46.262253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:49.777 [2024-11-27 05:54:46.262314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:49.777 [2024-11-27 05:54:46.262338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:49.777 [2024-11-27 05:54:46.262352] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:49.777 [2024-11-27 05:54:46.262363] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:49.777 [2024-11-27 05:54:46.272469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:49.777 qpair failed and we were unable to recover it. 
00:37:49.777 [2024-11-27 05:54:46.282315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:49.777 [2024-11-27 05:54:46.282369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:49.777 [2024-11-27 05:54:46.282393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:49.777 [2024-11-27 05:54:46.282408] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:49.777 [2024-11-27 05:54:46.282420] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:49.777 [2024-11-27 05:54:46.292620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:49.777 qpair failed and we were unable to recover it. 
00:37:49.777 [2024-11-27 05:54:46.304812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:49.777 [2024-11-27 05:54:46.304870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:49.777 [2024-11-27 05:54:46.304896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:49.777 [2024-11-27 05:54:46.304911] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:49.777 [2024-11-27 05:54:46.304923] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:49.777 [2024-11-27 05:54:46.312642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:49.777 qpair failed and we were unable to recover it. 
00:37:49.777 [2024-11-27 05:54:46.322430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:49.777 [2024-11-27 05:54:46.322487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:49.777 [2024-11-27 05:54:46.322512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:49.777 [2024-11-27 05:54:46.322526] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:49.777 [2024-11-27 05:54:46.322537] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:49.777 [2024-11-27 05:54:46.332593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:49.777 qpair failed and we were unable to recover it. 
00:37:49.777 [2024-11-27 05:54:46.342475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:49.777 [2024-11-27 05:54:46.342529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:49.777 [2024-11-27 05:54:46.342553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:49.777 [2024-11-27 05:54:46.342571] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:49.777 [2024-11-27 05:54:46.342582] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:49.777 [2024-11-27 05:54:46.352742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:49.777 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.362547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.362607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.362637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.362652] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.362664] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.372803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.382641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.382696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.382720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.382735] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.382747] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.393145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.402713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.402768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.402792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.402807] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.402818] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.412990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.422747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.422805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.422830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.422844] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.422859] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.433134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.442767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.442822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.442847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.442861] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.442873] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.455062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.462842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.462910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.462934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.462949] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.462962] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.473260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.482925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.482984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.483008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.483023] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.483034] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.493268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.503064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.503118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.503143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.503158] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.503170] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.513247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.522969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.523029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.523053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.523067] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.523079] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.533269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.543106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.543165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.036 [2024-11-27 05:54:46.543189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.036 [2024-11-27 05:54:46.543204] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.036 [2024-11-27 05:54:46.543216] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.036 [2024-11-27 05:54:46.553429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.036 qpair failed and we were unable to recover it. 
00:37:50.036 [2024-11-27 05:54:46.563147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.036 [2024-11-27 05:54:46.563208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.037 [2024-11-27 05:54:46.563232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.037 [2024-11-27 05:54:46.563246] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.037 [2024-11-27 05:54:46.563258] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.037 [2024-11-27 05:54:46.573402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.037 qpair failed and we were unable to recover it. 
00:37:50.037 [2024-11-27 05:54:46.583188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.037 [2024-11-27 05:54:46.583244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.037 [2024-11-27 05:54:46.583270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.037 [2024-11-27 05:54:46.583284] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.037 [2024-11-27 05:54:46.583296] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.037 [2024-11-27 05:54:46.593466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.037 qpair failed and we were unable to recover it. 
00:37:50.037 [2024-11-27 05:54:46.605111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.037 [2024-11-27 05:54:46.605170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.037 [2024-11-27 05:54:46.605198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.037 [2024-11-27 05:54:46.605212] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.037 [2024-11-27 05:54:46.605223] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.037 [2024-11-27 05:54:46.613520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.037 qpair failed and we were unable to recover it. 
00:37:50.295 [2024-11-27 05:54:46.623335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.295 [2024-11-27 05:54:46.623388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.295 [2024-11-27 05:54:46.623412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.295 [2024-11-27 05:54:46.623426] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.295 [2024-11-27 05:54:46.623438] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.295 [2024-11-27 05:54:46.633537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.295 qpair failed and we were unable to recover it. 
00:37:50.295 [2024-11-27 05:54:46.643474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.295 [2024-11-27 05:54:46.643527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.295 [2024-11-27 05:54:46.643552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.295 [2024-11-27 05:54:46.643566] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.295 [2024-11-27 05:54:46.643578] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.295 [2024-11-27 05:54:46.653470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.295 qpair failed and we were unable to recover it. 
00:37:50.295 [2024-11-27 05:54:46.663404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.295 [2024-11-27 05:54:46.663460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.295 [2024-11-27 05:54:46.663484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.295 [2024-11-27 05:54:46.663498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.295 [2024-11-27 05:54:46.663510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.673749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.683436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.683492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.683517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.683536] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.683547] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.693724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.703547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.703600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.703630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.703644] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.703656] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.713905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.723526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.723580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.723604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.723624] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.723636] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.733846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.743651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.743704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.743728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.743743] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.743754] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.755056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.763785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.763842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.763867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.763882] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.763893] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.773873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.783727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.783803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.783827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.783841] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.783853] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.793954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.803835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.803896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.803920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.803935] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.803947] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.813897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.823766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.823829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.823853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.823867] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.823878] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.834040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.843967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.844026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.844050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.844064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.844076] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.854134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.296 [2024-11-27 05:54:46.863997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.296 [2024-11-27 05:54:46.864064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.296 [2024-11-27 05:54:46.864089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.296 [2024-11-27 05:54:46.864103] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.296 [2024-11-27 05:54:46.864114] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.296 [2024-11-27 05:54:46.874190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.296 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:46.884056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:46.884122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:46.884146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:46.884160] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:46.884172] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:46.894211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:46.904158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:46.904225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:46.904250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:46.904264] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:46.904276] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:46.914417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:46.924203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:46.924261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:46.924284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:46.924298] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:46.924310] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:46.934303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:46.944162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:46.944223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:46.944251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:46.944265] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:46.944277] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:46.954479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:46.964290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:46.964350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:46.964373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:46.964387] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:46.964399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:46.974471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:46.984402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:46.984461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:46.984485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:46.984500] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:46.984511] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:46.994561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.004345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.004400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.004424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:47.004437] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:47.004449] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:47.014655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.024451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.024510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.024534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:47.024552] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:47.024564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:47.035051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.044680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.044737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.044761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:47.044774] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:47.044787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:47.054737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.064776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.064829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.064853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:47.064867] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:47.064879] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:47.074939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.084731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.084786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.084811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:47.084825] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:47.084837] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:47.094965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.104718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.104780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.104804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:47.104819] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:47.104830] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:47.114921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.124877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.124935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.124958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.568 [2024-11-27 05:54:47.124972] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.568 [2024-11-27 05:54:47.124984] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.568 [2024-11-27 05:54:47.134976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.568 qpair failed and we were unable to recover it. 
00:37:50.568 [2024-11-27 05:54:47.144986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.568 [2024-11-27 05:54:47.145049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.568 [2024-11-27 05:54:47.145073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.569 [2024-11-27 05:54:47.145088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.569 [2024-11-27 05:54:47.145100] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.827 [2024-11-27 05:54:47.155028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.827 qpair failed and we were unable to recover it. 
00:37:50.827 [2024-11-27 05:54:47.164989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.827 [2024-11-27 05:54:47.165053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.827 [2024-11-27 05:54:47.165077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.827 [2024-11-27 05:54:47.165091] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.827 [2024-11-27 05:54:47.165103] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.175078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.185060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.185120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.185144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.185159] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.185170] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.195157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.205112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.205172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.205196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.205210] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.205222] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.215294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.225254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.225315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.225339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.225353] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.225364] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.235350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.245249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.245304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.245328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.245342] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.245354] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.255370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.265256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.265315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.265340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.265354] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.265366] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.275560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.285293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.285348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.285376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.285390] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.285402] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.295371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.305307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.305369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.305394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.305408] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.305419] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.315598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.325353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.325410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.325434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.325448] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.325459] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.335614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.345457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.345515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.345539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.345554] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.345565] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.355928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.365466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.365525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.365549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.365564] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.365579] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.375651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.385541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.385598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.385629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.385644] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.385655] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:50.828 [2024-11-27 05:54:47.395813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:50.828 qpair failed and we were unable to recover it. 
00:37:50.828 [2024-11-27 05:54:47.405675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:50.828 [2024-11-27 05:54:47.405732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:50.828 [2024-11-27 05:54:47.405756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:50.828 [2024-11-27 05:54:47.405770] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:50.828 [2024-11-27 05:54:47.405782] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.087 [2024-11-27 05:54:47.415906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.087 qpair failed and we were unable to recover it. 
00:37:51.087 [2024-11-27 05:54:47.425748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.087 [2024-11-27 05:54:47.425810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.087 [2024-11-27 05:54:47.425834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.087 [2024-11-27 05:54:47.425847] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.087 [2024-11-27 05:54:47.425859] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.087 [2024-11-27 05:54:47.436066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.087 qpair failed and we were unable to recover it. 
00:37:51.087 [2024-11-27 05:54:47.445643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.445707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.445732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.445746] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.445758] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.455914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.465776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.465836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.465860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.465875] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.465886] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.476045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.485838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.485897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.485922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.485937] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.485949] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.498621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.505990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.506057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.506082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.506096] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.506108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.516058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.526022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.526089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.526113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.526127] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.526138] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.536221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.546158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.546225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.546249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.546263] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.546275] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.556172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.566025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.566080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.566103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.566118] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.566130] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.576280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.586169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.586226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.586250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.586264] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.586276] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.596259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.606233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.606296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.606320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.606334] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.606346] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.616300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.626256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.626309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.087 [2024-11-27 05:54:47.626337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.087 [2024-11-27 05:54:47.626352] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.087 [2024-11-27 05:54:47.626363] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.087 [2024-11-27 05:54:47.636515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.087 qpair failed and we were unable to recover it.
00:37:51.087 [2024-11-27 05:54:47.647438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.087 [2024-11-27 05:54:47.647494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.088 [2024-11-27 05:54:47.647519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.088 [2024-11-27 05:54:47.647533] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.088 [2024-11-27 05:54:47.647544] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.088 [2024-11-27 05:54:47.656371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.088 qpair failed and we were unable to recover it.
00:37:51.088 [2024-11-27 05:54:47.666322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.088 [2024-11-27 05:54:47.666380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.088 [2024-11-27 05:54:47.666404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.088 [2024-11-27 05:54:47.666418] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.088 [2024-11-27 05:54:47.666430] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.345 [2024-11-27 05:54:47.676929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.345 qpair failed and we were unable to recover it.
00:37:51.345 [2024-11-27 05:54:47.686408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.686468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.686492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.686505] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.686517] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.696704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.706484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.706537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.706560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.706574] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.706590] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.716765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.726654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.726711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.726735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.726749] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.726761] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.736755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.746655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.746717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.746741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.746755] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.746766] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.756703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.766593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.766656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.766681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.766695] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.766706] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.776869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.786738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.786799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.786822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.786836] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.786849] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.796981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.806818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.806879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.806903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.806917] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.806929] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.816906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.826876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.826937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.826961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.826976] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.826987] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.837149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.846927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.846986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.847010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.847024] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.847036] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.857050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.866952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.867009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.867033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.867048] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.867060] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.877223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.886967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.887029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.887053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.887067] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.887078] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.897243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.907086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.907139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.907163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.907177] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.907189] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.346 [2024-11-27 05:54:47.917221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.346 qpair failed and we were unable to recover it.
00:37:51.346 [2024-11-27 05:54:47.927176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.346 [2024-11-27 05:54:47.927229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.346 [2024-11-27 05:54:47.927253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.346 [2024-11-27 05:54:47.927267] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.346 [2024-11-27 05:54:47.927280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.605 [2024-11-27 05:54:47.937453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.605 qpair failed and we were unable to recover it.
00:37:51.605 [2024-11-27 05:54:47.947265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.605 [2024-11-27 05:54:47.947327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.605 [2024-11-27 05:54:47.947352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.605 [2024-11-27 05:54:47.947367] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.605 [2024-11-27 05:54:47.947378] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.605 [2024-11-27 05:54:47.957430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.605 qpair failed and we were unable to recover it.
00:37:51.605 [2024-11-27 05:54:47.967233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.605 [2024-11-27 05:54:47.967292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.605 [2024-11-27 05:54:47.967321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.605 [2024-11-27 05:54:47.967335] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.605 [2024-11-27 05:54:47.967347] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.605 [2024-11-27 05:54:47.977528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.605 qpair failed and we were unable to recover it.
00:37:51.605 [2024-11-27 05:54:47.987291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.605 [2024-11-27 05:54:47.987353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.605 [2024-11-27 05:54:47.987377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.605 [2024-11-27 05:54:47.987390] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.605 [2024-11-27 05:54:47.987402] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.605 [2024-11-27 05:54:47.997458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.605 qpair failed and we were unable to recover it.
00:37:51.605 [2024-11-27 05:54:48.007399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.605 [2024-11-27 05:54:48.007458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.605 [2024-11-27 05:54:48.007481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.605 [2024-11-27 05:54:48.007496] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.605 [2024-11-27 05:54:48.007508] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.605 [2024-11-27 05:54:48.017585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.606 qpair failed and we were unable to recover it.
00:37:51.606 [2024-11-27 05:54:48.027388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.606 [2024-11-27 05:54:48.027446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.606 [2024-11-27 05:54:48.027471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.606 [2024-11-27 05:54:48.027485] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.606 [2024-11-27 05:54:48.027497] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.606 [2024-11-27 05:54:48.037789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.606 qpair failed and we were unable to recover it.
00:37:51.606 [2024-11-27 05:54:48.047510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:51.606 [2024-11-27 05:54:48.047566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:51.606 [2024-11-27 05:54:48.047590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:51.606 [2024-11-27 05:54:48.047605] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:51.606 [2024-11-27 05:54:48.047626] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140
00:37:51.606 [2024-11-27 05:54:48.057588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:51.606 qpair failed and we were unable to recover it.
00:37:51.606 [2024-11-27 05:54:48.067532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.606 [2024-11-27 05:54:48.067593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.606 [2024-11-27 05:54:48.067631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.606 [2024-11-27 05:54:48.067645] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.606 [2024-11-27 05:54:48.067657] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.606 [2024-11-27 05:54:48.077714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.606 qpair failed and we were unable to recover it. 
00:37:51.606 [2024-11-27 05:54:48.092674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.606 [2024-11-27 05:54:48.092736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.606 [2024-11-27 05:54:48.092761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.606 [2024-11-27 05:54:48.092775] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.606 [2024-11-27 05:54:48.092787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.606 [2024-11-27 05:54:48.097689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.606 qpair failed and we were unable to recover it. 
00:37:51.606 [2024-11-27 05:54:48.107636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.606 [2024-11-27 05:54:48.107702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.606 [2024-11-27 05:54:48.107727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.606 [2024-11-27 05:54:48.107741] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.606 [2024-11-27 05:54:48.107752] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.606 [2024-11-27 05:54:48.117869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.606 qpair failed and we were unable to recover it. 
00:37:51.606 [2024-11-27 05:54:48.127734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.606 [2024-11-27 05:54:48.127794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.606 [2024-11-27 05:54:48.127818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.606 [2024-11-27 05:54:48.127833] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.606 [2024-11-27 05:54:48.127844] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.606 [2024-11-27 05:54:48.137877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.606 qpair failed and we were unable to recover it. 
00:37:51.606 [2024-11-27 05:54:48.147723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.606 [2024-11-27 05:54:48.147780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.606 [2024-11-27 05:54:48.147805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.606 [2024-11-27 05:54:48.147819] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.606 [2024-11-27 05:54:48.147831] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.606 [2024-11-27 05:54:48.157923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.606 qpair failed and we were unable to recover it. 
00:37:51.606 [2024-11-27 05:54:48.167925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.606 [2024-11-27 05:54:48.167988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.606 [2024-11-27 05:54:48.168012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.606 [2024-11-27 05:54:48.168027] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.606 [2024-11-27 05:54:48.168038] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.606 [2024-11-27 05:54:48.178014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.606 qpair failed and we were unable to recover it. 
00:37:51.606 [2024-11-27 05:54:48.187875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.606 [2024-11-27 05:54:48.187934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.606 [2024-11-27 05:54:48.187958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.606 [2024-11-27 05:54:48.187972] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.606 [2024-11-27 05:54:48.187983] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.864 [2024-11-27 05:54:48.197915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.864 qpair failed and we were unable to recover it. 
00:37:51.864 [2024-11-27 05:54:48.207933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.864 [2024-11-27 05:54:48.207988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.864 [2024-11-27 05:54:48.208011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.864 [2024-11-27 05:54:48.208025] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.864 [2024-11-27 05:54:48.208037] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.864 [2024-11-27 05:54:48.218336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.864 qpair failed and we were unable to recover it. 
00:37:51.864 [2024-11-27 05:54:48.228050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.864 [2024-11-27 05:54:48.228111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.864 [2024-11-27 05:54:48.228136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.864 [2024-11-27 05:54:48.228150] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.864 [2024-11-27 05:54:48.228162] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.864 [2024-11-27 05:54:48.240226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.864 qpair failed and we were unable to recover it. 
00:37:51.864 [2024-11-27 05:54:48.248148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.864 [2024-11-27 05:54:48.248203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.864 [2024-11-27 05:54:48.248228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.864 [2024-11-27 05:54:48.248243] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.865 [2024-11-27 05:54:48.248255] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:51.865 [2024-11-27 05:54:48.258440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:51.865 qpair failed and we were unable to recover it. 
00:37:51.865 [2024-11-27 05:54:48.268349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.865 [2024-11-27 05:54:48.268424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.865 [2024-11-27 05:54:48.268465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.865 [2024-11-27 05:54:48.268487] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.865 [2024-11-27 05:54:48.268507] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:51.865 [2024-11-27 05:54:48.278640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:51.865 qpair failed and we were unable to recover it. 
00:37:51.865 [2024-11-27 05:54:48.288285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.865 [2024-11-27 05:54:48.288357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.865 [2024-11-27 05:54:48.288385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.865 [2024-11-27 05:54:48.288403] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.865 [2024-11-27 05:54:48.288418] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:51.865 [2024-11-27 05:54:48.298641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:51.865 qpair failed and we were unable to recover it. 
00:37:51.865 [2024-11-27 05:54:48.308369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.865 [2024-11-27 05:54:48.308439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.865 [2024-11-27 05:54:48.308470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.865 [2024-11-27 05:54:48.308489] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.865 [2024-11-27 05:54:48.308503] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:37:51.865 [2024-11-27 05:54:48.318908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:51.865 qpair failed and we were unable to recover it. 
00:37:51.865 [2024-11-27 05:54:48.328356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.865 [2024-11-27 05:54:48.328424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.865 [2024-11-27 05:54:48.328451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.865 [2024-11-27 05:54:48.328468] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.865 [2024-11-27 05:54:48.328480] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:37:51.865 [2024-11-27 05:54:48.338743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:51.865 qpair failed and we were unable to recover it. 00:37:51.865 [2024-11-27 05:54:48.339073] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:37:51.865 A controller has encountered a failure and is being reset. 
00:37:51.865 [2024-11-27 05:54:48.348565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.865 [2024-11-27 05:54:48.348642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.865 [2024-11-27 05:54:48.348681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.865 [2024-11-27 05:54:48.348702] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.865 [2024-11-27 05:54:48.348721] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:37:51.865 [2024-11-27 05:54:48.358759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:51.865 qpair failed and we were unable to recover it. 
00:37:51.865 [2024-11-27 05:54:48.368471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:51.865 [2024-11-27 05:54:48.368534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:51.865 [2024-11-27 05:54:48.368560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:51.865 [2024-11-27 05:54:48.368577] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:51.865 [2024-11-27 05:54:48.368589] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:37:51.865 [2024-11-27 05:54:48.378831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:51.865 qpair failed and we were unable to recover it. 00:37:51.865 [2024-11-27 05:54:48.379118] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:37:51.865 [2024-11-27 05:54:48.425290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:37:51.865 Controller properly reset. 
00:37:52.121 Initializing NVMe Controllers 00:37:52.122 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:52.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:52.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:52.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:52.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:52.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:52.122 Initialization complete. Launching workers. 00:37:52.122 Starting thread on core 1 00:37:52.122 Starting thread on core 2 00:37:52.122 Starting thread on core 3 00:37:52.122 Starting thread on core 0 00:37:52.122 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:52.122 00:37:52.122 real 0m12.139s 00:37:52.122 user 0m26.527s 00:37:52.122 sys 0m2.730s 00:37:52.122 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.122 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:52.122 ************************************ 00:37:52.122 END TEST nvmf_target_disconnect_tc2 00:37:52.122 ************************************ 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:52.378 ************************************ 00:37:52.378 START TEST nvmf_target_disconnect_tc3 00:37:52.378 ************************************ 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3599265 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:37:52.378 05:54:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:37:54.276 05:54:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3598166 00:37:54.276 05:54:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:37:55.651 Write completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Read completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Write completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Write completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Write completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Read completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Read completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Read 
completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Write completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.651 Read completed with error (sct=0, sc=8) 00:37:55.651 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Write completed 
with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 Read completed with error (sct=0, sc=8) 00:37:55.652 starting I/O failed 00:37:55.652 [2024-11-27 05:54:52.047434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:37:56.218 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3598166 Killed "${NVMF_APP[@]}" "$@" 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3599953 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3599953 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3599953 ']' 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.218 05:54:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:56.477 [2024-11-27 05:54:52.859861] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:37:56.477 [2024-11-27 05:54:52.859959] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.477 [2024-11-27 05:54:53.040233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read 
completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Write completed with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 Read completed 
with error (sct=0, sc=8) 00:37:56.477 starting I/O failed 00:37:56.477 [2024-11-27 05:54:53.053150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:37:56.735 [2024-11-27 05:54:53.144215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.735 [2024-11-27 05:54:53.144262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.735 [2024-11-27 05:54:53.144275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.735 [2024-11-27 05:54:53.144287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.735 [2024-11-27 05:54:53.144297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.735 [2024-11-27 05:54:53.146867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:56.735 [2024-11-27 05:54:53.146961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:56.735 [2024-11-27 05:54:53.147026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:56.735 [2024-11-27 05:54:53.147052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:57.301 05:54:53 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:57.301 Malloc0 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.301 05:54:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:57.301 [2024-11-27 05:54:53.819622] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7fa04351a940) succeed. 00:37:57.301 [2024-11-27 05:54:53.829517] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7fa0433bd940) succeed. 
00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 
00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Read completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 Write completed with error (sct=0, sc=8) 00:37:57.560 starting I/O failed 00:37:57.560 [2024-11-27 05:54:54.058806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:57.560 05:54:54 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:57.560 [2024-11-27 05:54:54.111994] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.560 05:54:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3599265 00:37:58.495 Read completed 
with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Read completed with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Write completed with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Write completed with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Read completed with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Write completed with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Read completed with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Read completed with error (sct=0, sc=8) 00:37:58.495 starting I/O failed 00:37:58.495 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error 
(sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Write completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 Read completed with error (sct=0, sc=8) 00:37:58.496 starting I/O failed 00:37:58.496 [2024-11-27 05:54:55.064439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:37:58.496 [2024-11-27 05:54:55.066317] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:58.496 [2024-11-27 05:54:55.066356] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:58.496 [2024-11-27 05:54:55.066369] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:59.870 [2024-11-27 05:54:56.070569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:37:59.870 qpair failed and we were unable to recover it. 
00:37:59.870 [2024-11-27 05:54:56.072401] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:59.870 [2024-11-27 05:54:56.072434] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:59.870 [2024-11-27 05:54:56.072447] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:38:00.804 [2024-11-27 05:54:57.076473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:38:00.804 qpair failed and we were unable to recover it. 00:38:00.804 [2024-11-27 05:54:57.078173] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:00.804 [2024-11-27 05:54:57.078204] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:00.804 [2024-11-27 05:54:57.078217] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:38:01.739 [2024-11-27 05:54:58.082328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:38:01.739 qpair failed and we were unable to recover it. 
00:38:01.739 [2024-11-27 05:54:58.084471] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:01.739 [2024-11-27 05:54:58.084517] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:01.739 [2024-11-27 05:54:58.084536] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:38:02.674 [2024-11-27 05:54:59.088695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:38:02.674 qpair failed and we were unable to recover it. 00:38:02.674 [2024-11-27 05:54:59.090768] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:02.674 [2024-11-27 05:54:59.090800] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:02.674 [2024-11-27 05:54:59.090813] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:38:03.607 [2024-11-27 05:55:00.095079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.607 qpair failed and we were unable to recover it. 
00:38:03.607 [2024-11-27 05:55:00.097186] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:03.607 [2024-11-27 05:55:00.097222] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:03.607 [2024-11-27 05:55:00.097235] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:38:04.539 [2024-11-27 05:55:01.101297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:04.539 qpair failed and we were unable to recover it. 00:38:04.539 [2024-11-27 05:55:01.103343] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:04.539 [2024-11-27 05:55:01.103374] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:04.539 [2024-11-27 05:55:01.103387] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:38:05.549 [2024-11-27 05:55:02.107538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:05.549 qpair failed and we were unable to recover it. 
00:38:05.549 [2024-11-27 05:55:02.109897] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:05.549 [2024-11-27 05:55:02.109943] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:05.549 [2024-11-27 05:55:02.109961] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:38:06.579 [2024-11-27 05:55:03.114057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:38:06.579 qpair failed and we were unable to recover it. 00:38:06.579 [2024-11-27 05:55:03.115926] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:06.579 [2024-11-27 05:55:03.115961] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:06.579 [2024-11-27 05:55:03.115974] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:38:07.953 [2024-11-27 05:55:04.120200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:38:07.953 qpair failed and we were unable to recover it. 
00:38:07.953 [2024-11-27 05:55:04.122528] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:07.953 [2024-11-27 05:55:04.122579] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:07.953 [2024-11-27 05:55:04.122596] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:38:08.889 [2024-11-27 05:55:05.126687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.889 qpair failed and we were unable to recover it. 00:38:08.889 [2024-11-27 05:55:05.128471] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:08.889 [2024-11-27 05:55:05.128502] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:08.889 [2024-11-27 05:55:05.128515] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:38:09.826 [2024-11-27 05:55:06.132636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:38:09.826 qpair failed and we were unable to recover it. 00:38:09.826 [2024-11-27 05:55:06.132937] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:38:09.826 A controller has encountered a failure and is being reset. 00:38:09.826 Resorting to new failover address 192.168.100.9 00:38:09.826 [2024-11-27 05:55:06.133061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:38:09.826 [2024-11-27 05:55:06.133160] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:38:09.826 [2024-11-27 05:55:06.178747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:38:09.826 Controller properly reset. 00:38:09.826 Initializing NVMe Controllers 00:38:09.826 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:38:09.826 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:38:09.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:09.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:09.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:09.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:09.826 Initialization complete. Launching workers. 
00:38:09.826 Starting thread on core 1 00:38:09.826 Starting thread on core 2 00:38:09.826 Starting thread on core 3 00:38:09.826 Starting thread on core 0 00:38:09.826 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:38:10.086 00:38:10.086 real 0m17.662s 00:38:10.086 user 0m58.839s 00:38:10.086 sys 0m4.671s 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:10.086 ************************************ 00:38:10.086 END TEST nvmf_target_disconnect_tc3 00:38:10.086 ************************************ 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:38:10.086 rmmod nvme_rdma 00:38:10.086 rmmod nvme_fabrics 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3599953 ']' 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3599953 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3599953 ']' 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3599953 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599953 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599953' 00:38:10.086 killing process with pid 3599953 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3599953 00:38:10.086 05:55:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3599953 00:38:11.992 05:55:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:11.992 05:55:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:38:11.992 00:38:11.992 real 0m41.678s 00:38:11.992 user 
2m27.640s 00:38:11.992 sys 0m14.659s 00:38:11.992 05:55:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.992 05:55:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:11.992 ************************************ 00:38:11.992 END TEST nvmf_target_disconnect 00:38:11.992 ************************************ 00:38:11.992 05:55:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:11.992 00:38:11.992 real 8m22.428s 00:38:11.992 user 22m56.535s 00:38:11.992 sys 2m7.560s 00:38:11.992 05:55:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.992 05:55:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.992 ************************************ 00:38:11.992 END TEST nvmf_host 00:38:11.992 ************************************ 00:38:11.992 05:55:08 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:38:11.992 00:38:11.992 real 30m57.503s 00:38:11.992 user 87m8.232s 00:38:11.992 sys 7m56.528s 00:38:11.992 05:55:08 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.992 05:55:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:11.992 ************************************ 00:38:11.992 END TEST nvmf_rdma 00:38:11.992 ************************************ 00:38:11.992 05:55:08 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:38:11.992 05:55:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:11.992 05:55:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:11.992 05:55:08 -- common/autotest_common.sh@10 -- # set +x 00:38:12.252 ************************************ 00:38:12.252 START TEST spdkcli_nvmf_rdma 00:38:12.252 ************************************ 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:38:12.252 * Looking for test storage... 00:38:12.252 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:12.252 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:12.253 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:38:12.253 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:12.253 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:12.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.253 --rc genhtml_branch_coverage=1 00:38:12.253 --rc genhtml_function_coverage=1 00:38:12.253 --rc genhtml_legend=1 00:38:12.253 --rc geninfo_all_blocks=1 00:38:12.253 --rc geninfo_unexecuted_blocks=1 00:38:12.253 00:38:12.253 ' 00:38:12.253 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:12.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.253 --rc genhtml_branch_coverage=1 00:38:12.253 --rc genhtml_function_coverage=1 00:38:12.253 --rc genhtml_legend=1 00:38:12.253 --rc geninfo_all_blocks=1 00:38:12.253 
--rc geninfo_unexecuted_blocks=1 00:38:12.253 00:38:12.253 ' 00:38:12.253 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:12.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.253 --rc genhtml_branch_coverage=1 00:38:12.253 --rc genhtml_function_coverage=1 00:38:12.253 --rc genhtml_legend=1 00:38:12.297 --rc geninfo_all_blocks=1 00:38:12.297 --rc geninfo_unexecuted_blocks=1 00:38:12.297 00:38:12.297 ' 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.297 --rc genhtml_branch_coverage=1 00:38:12.297 --rc genhtml_function_coverage=1 00:38:12.297 --rc genhtml_legend=1 00:38:12.297 --rc geninfo_all_blocks=1 00:38:12.297 --rc geninfo_unexecuted_blocks=1 00:38:12.297 00:38:12.297 ' 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:12.297 
05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.297 05:55:08 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # 
export NVMF_APP_SHM_ID 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:12.298 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3602608 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3602608 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 3602608 ']' 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.298 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.557 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.557 05:55:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:12.557 [2024-11-27 05:55:08.911141] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:38:12.557 [2024-11-27 05:55:08.911236] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602608 ] 00:38:12.557 [2024-11-27 05:55:09.060294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:12.815 [2024-11-27 05:55:09.162214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.815 [2024-11-27 05:55:09.162223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 
00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:38:13.383 05:55:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:38:23.361 05:55:18 
spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:38:23.361 05:55:18 
spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:38:23.361 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:38:23.361 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:38:23.361 05:55:18 
spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:38:23.361 Found net devices under 0000:d9:00.0: mlx_0_0 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:38:23.361 Found net devices under 0000:d9:00.1: mlx_0_1 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 
00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:38:23.361 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:23.362 05:55:18 
spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:38:23.362 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:23.362 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:38:23.362 altname enp217s0f0np0 00:38:23.362 altname ens818f0np0 00:38:23.362 inet 192.168.100.8/24 scope global mlx_0_0 00:38:23.362 valid_lft forever 
preferred_lft forever 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:38:23.362 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:23.362 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:38:23.362 altname enp217s0f1np1 00:38:23.362 altname ens818f1np1 00:38:23.362 inet 192.168.100.9/24 scope global mlx_0_1 00:38:23.362 valid_lft forever preferred_lft forever 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # 
get_ip_address mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:38:23.362 192.168.100.9' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:38:23.362 192.168.100.9' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:38:23.362 192.168.100.9' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:23.362 05:55:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:23.363 05:55:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 
-- # set +x 00:38:23.363 05:55:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:23.363 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:23.363 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:23.363 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:23.363 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:23.363 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:23.363 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:23.363 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:23.363 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:23.363 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:23.363 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:23.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:23.363 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:23.363 ' 00:38:24.741 [2024-11-27 05:55:21.034982] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002af40/0x7f4baf5bd940) succeed. 00:38:24.741 [2024-11-27 05:55:21.045137] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002b0c0/0x7f4baf579940) succeed. 
00:38:26.120 [2024-11-27 05:55:22.385236] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:38:28.649 [2024-11-27 05:55:24.632362] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:38:30.025 [2024-11-27 05:55:26.562708] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:38:31.926 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:31.926 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:31.926 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:31.926 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:31.926 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:31.926 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:31.926 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:31.926 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:38:31.926 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:31.926 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:38:31.926 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:38:31.926 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:38:31.926 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:38:31.926 05:55:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:38:31.926 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:31.926 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:38:31.927 05:55:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:38:31.927 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:31.927 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:38:31.927 05:55:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match
00:38:31.927 05:55:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:38:32.212 05:55:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:38:32.212 05:55:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:38:32.213 05:55:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:38:32.213 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:32.213 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:38:32.213 05:55:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:38:32.213 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:32.213 05:55:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:38:32.213 05:55:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:38:32.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:38:32.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:38:32.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:38:32.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:38:32.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:38:32.213 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:38:32.213 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:38:32.213 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:38:32.213 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:38:32.213 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:38:32.213 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:38:32.213 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:38:32.213 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:38:32.213 '
00:38:38.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:38:38.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:38:38.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:38:38.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:38:38.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:38:38.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:38:38.811 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:38:38.811 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:38:38.811 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:38:38.811 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:38:38.811 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:38:38.811 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:38:38.811 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:38:38.812 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3602608
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 3602608 ']'
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 3602608
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602608
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602608'
killing process with pid 3602608
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 3602608
00:38:38.812 05:55:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 3602608
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:38:39.389
00:38:39.389 real 0m27.219s
00:38:39.389 user 0m57.161s
00:38:39.389 sys 0m7.685s
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:39.389 05:55:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:38:39.389 ************************************
00:38:39.389 END TEST spdkcli_nvmf_rdma
************************************
00:38:39.389 05:55:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:38:39.389 05:55:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:39.389 05:55:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:39.389 05:55:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:38:39.389 05:55:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:38:39.389 05:55:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:38:39.389 05:55:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:38:39.389 05:55:35 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:39.389 05:55:35 -- common/autotest_common.sh@10 -- # set +x
00:38:39.389 05:55:35 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:38:39.389 05:55:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:38:39.389 05:55:35 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:38:39.389 05:55:35 -- common/autotest_common.sh@10 -- # set +x
00:38:45.956 INFO: APP EXITING
00:38:45.957 INFO: killing all VMs
00:38:45.957 INFO: killing vhost app
00:38:45.957 INFO: EXIT DONE
00:38:48.493 Waiting for block devices as requested
00:38:48.493 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:38:48.493 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:38:48.752 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:38:48.752 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:38:48.752 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:38:48.752 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:38:49.012 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:38:49.012 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:38:49.012 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:38:49.271 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:38:49.271 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:38:49.271 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:38:49.531 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:38:49.531 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:38:49.531 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:38:49.531 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:38:49.790 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:38:53.985 Cleaning
00:38:53.985 Removing: /var/run/dpdk/spdk0/config
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:53.985 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:53.985 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:53.985 Removing: /var/run/dpdk/spdk1/config
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:53.985 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:53.985 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:53.985 Removing: /var/run/dpdk/spdk1/mp_socket
00:38:53.985 Removing: /var/run/dpdk/spdk2/config
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:53.985 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:53.985 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:53.985 Removing: /var/run/dpdk/spdk3/config
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:53.985 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:53.985 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:53.985 Removing: /var/run/dpdk/spdk4/config
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:53.985 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:53.985 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:53.985 Removing: /dev/shm/bdevperf_trace.pid3188369
00:38:53.985 Removing: /dev/shm/bdev_svc_trace.1
00:38:53.985 Removing: /dev/shm/nvmf_trace.0
00:38:53.985 Removing: /dev/shm/spdk_tgt_trace.pid3127949
00:38:53.985 Removing: /var/run/dpdk/spdk0
00:38:53.985 Removing: /var/run/dpdk/spdk1
00:38:53.985 Removing: /var/run/dpdk/spdk2
00:38:53.985 Removing: /var/run/dpdk/spdk3
00:38:53.985 Removing: /var/run/dpdk/spdk4
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3123574
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3125356
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3127949
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3128937
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3130289
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3130841
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3132225
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3132494
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3133165
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3139271
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3141007
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3141641
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3142484
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3143168
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3143965
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3144252
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3144548
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3144866
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3145983
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3149420
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3150243
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3150867
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3151089
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3152861
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3153000
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3154901
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3155035
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3155730
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3155822
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3156389
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3156587
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3158211
00:38:53.985 Removing: /var/run/dpdk/spdk_pid3158810
00:38:53.986 Removing: /var/run/dpdk/spdk_pid3159406
00:38:53.986 Removing: /var/run/dpdk/spdk_pid3164753
00:38:53.986 Removing: /var/run/dpdk/spdk_pid3170170
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3181621
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3182433
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3188369
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3188811
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3194376
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3201538
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3204538
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3217756
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3248130
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3253201
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3357665
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3364021
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3370754
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3382140
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3414881
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3420655
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3466795
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3468613
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3470408
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3472153
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3477956
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3485775
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3494940
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3496046
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3497121
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3498183
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3498714
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3504613
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3504620
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3510714
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3511348
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3511916
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3512711
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3512725
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3515147
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3517076
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3518929
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3520832
00:38:54.245 Removing: /var/run/dpdk/spdk_pid3522721
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3524652
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3531788
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3532438
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3534718
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3536176
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3545243
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3548076
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3554695
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3566168
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3566189
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3589614
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3589933
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3597041
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3597480
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3599265
00:38:54.246 Removing: /var/run/dpdk/spdk_pid3602608
00:38:54.246 Clean
00:38:54.506 05:55:50 -- common/autotest_common.sh@1453 -- # return 0
00:38:54.506 05:55:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:54.506 05:55:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:54.506 05:55:50 -- common/autotest_common.sh@10 -- # set +x
00:38:54.506 05:55:50 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:54.506 05:55:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:54.506 05:55:50 -- common/autotest_common.sh@10 -- # set +x
00:38:54.506 05:55:50 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:38:54.506 05:55:50 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:38:54.506 05:55:50 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:38:54.506 05:55:50 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:54.506 05:55:50 -- spdk/autotest.sh@398 -- # hostname
00:38:54.506 05:55:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:38:54.765 geninfo: WARNING: invalid characters removed from testname!
00:39:16.695 05:56:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:39:17.263 05:56:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:39:19.168 05:56:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:39:21.074 05:56:17 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:39:22.453 05:56:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:39:24.357 05:56:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:39:25.736 05:56:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:25.736 05:56:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:25.736 05:56:22 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]]
00:39:25.736 05:56:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:25.736 05:56:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:25.736 05:56:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:39:25.736 + [[ -n 3041971 ]]
00:39:25.736 + sudo kill 3041971
00:39:26.005 [Pipeline] }
00:39:26.021 [Pipeline] // stage
00:39:26.028 [Pipeline] }
00:39:26.044 [Pipeline] // timeout
00:39:26.051 [Pipeline] }
00:39:26.067 [Pipeline] // catchError
00:39:26.074 [Pipeline] }
00:39:26.091 [Pipeline] // wrap
00:39:26.099 [Pipeline] }
00:39:26.115 [Pipeline] // catchError
00:39:26.125 [Pipeline] stage
00:39:26.128 [Pipeline] { (Epilogue)
00:39:26.143 [Pipeline] catchError
00:39:26.145 [Pipeline] {
00:39:26.159 [Pipeline] echo
00:39:26.160 Cleanup processes
00:39:26.166 [Pipeline] sh
00:39:26.453 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:39:26.453 3624354 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:39:26.467 [Pipeline] sh
00:39:26.753 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:39:26.753 ++ awk '{print $1}'
00:39:26.753 ++ grep -v 'sudo pgrep'
00:39:26.753 + sudo kill -9
00:39:26.753 + true
00:39:26.766 [Pipeline] sh
00:39:27.051 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:27.051 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:39:33.622 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:39:37.971 [Pipeline] sh
00:39:38.257 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:38.257 Artifacts sizes are good
00:39:38.272 [Pipeline] archiveArtifacts
00:39:38.280 Archiving artifacts
00:39:38.424 [Pipeline] sh
00:39:38.724 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:39:38.740 [Pipeline] cleanWs
00:39:38.750 [WS-CLEANUP] Deleting project workspace...
00:39:38.750 [WS-CLEANUP] Deferred wipeout is used...
00:39:38.757 [WS-CLEANUP] done
00:39:38.760 [Pipeline] }
00:39:38.778 [Pipeline] // catchError
00:39:38.790 [Pipeline] sh
00:39:39.075 + logger -p user.info -t JENKINS-CI
00:39:39.085 [Pipeline] }
00:39:39.102 [Pipeline] // stage
00:39:39.107 [Pipeline] }
00:39:39.124 [Pipeline] // node
00:39:39.131 [Pipeline] End of Pipeline
00:39:39.163 Finished: SUCCESS